COSC2123: Algorithms & Analysis – Dynamic Programming
Hoang, RMIT University
Email : sonhoang. .au
Lecture 8
(RMIT University) Dynamic Programming Lecture 8 1 / 100
Overview
Levitin – The design and analysis of algorithms
This week we will be covering the material from Chapter 8.
Learning outcomes:
• Understand and be able to apply dynamic programming to solving
problems.
• Examples:
• Coin-row problem
• Computing the edit distance
• Knapsack
• Transitive closure – Warshall’s algorithm
Outline
1 Overview
2 Edit Distance
3 Knapsack Problem
4 Warshall’s Algorithm
5 Summary
Dynamic Programming
Dynamic Programming is a general algorithm approach for solving
problems using the solutions of (overlapping) subproblems.
Main idea:
1 Set up a recurrence relating a solution of larger instances to the
solutions of smaller instances.
2 Solve smaller instances once.
3 Record solutions in a table.
4 Extract the solution to the initial instance from the table, i.e., use
solutions of smaller instances to construct the solution of the initial
problem instance.
Dynamic Programming
Sounds similar? Divide-and-conquer?
Difference?
• Dynamic programming can be thought of as divide-and-conquer
and storing sub-solutions.
• Why have both then?
• Divide-and-conquer algorithms are preferred when the
sub-problems/instances are independent, e.g., Mergesort.
• The dynamic programming approach is better when the
sub-problems/instances are dependent, i.e., the solution to a
sub-problem may be needed multiple times. Saving solutions allows
them to be reused rather than recomputed. We trade space (more)
for time (faster).
Dynamic Programming Approaches
Two basic approaches to dynamic programming.
For both, first study a recursive divide-and-conquer algorithm and
figure out the dependencies between the subproblems.
1 Top-Down
• Start with a divide-and-conquer algorithm, and begin dividing
recursively.
• Only solve/recurse on a subproblem if the solution is not available
in the table.
• Save solutions to subproblems in a table.
2 Bottom-Up
• Solve all subproblems, and use solutions to subproblems to
construct solutions to larger problems.
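The two approaches can be contrasted on a simple recurrence such as Fibonacci. A minimal sketch, not from the slides (function names are illustrative):

```python
from functools import lru_cache

# Top-down: recurse exactly as divide-and-conquer would, but cache
# each subproblem's solution so it is computed at most once.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: solve all subproblems in order of increasing size,
# filling a table from the base cases upward.
def fib_bottom_up(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Without the cache, the top-down version recomputes the same subproblems exponentially often; with it, both versions do O(n) work.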
Dynamic Programming example: Coin-row Problem
Coin-row Problem
Given a row of n coins with positive integer values c1, c2, . . . , cn (not
necessarily distinct), pick up the maximum amount of money with the
constraint that no two adjacent coins can be selected.
EXAMPLE: 5,1,2,10,6,2. What is the best selection?
Dynamic Programming example: Coin-row Problem
EXAMPLE: 5,1,2,10,6,2. What is the best selection?
Rough sketch of dynamic programming approach: Say we are
considering whether to select the last coin (value 2) in our selection.
We have two choices: select or not select the last coin (to maximise
total value of coins selected):
• If we don’t select the last coin, then the best solution must be the
same as the sequence of coins selected for sub-problem
5,1,2,10,6.
• If we do select the last coin, then we can’t select the 2nd last coin
(hence cannot use the optimal selection for sub-problem
5,1,2,10,6). However, the best solution will include this last coin
+ the best solution for the sub-problem 5,1,2,10.
Dynamic Programming example: Coin-row Problem
• Let F (n) denote the maximum total amount of money picked up
after considering all n coins in row.
• Let ci denote the monetary value of coin number i .
Then we can write the recurrence relationship:
F (n) = max{total value of solution that selects the nth coin,
total value of solution that does not select the nth coin} for n > 1,
F (0) = 0,F (1) = value from picking up the first coin.
Formally:
F (n) = max{cn+total value of solution to the n − 2 coin sub-problem,
total value of solution to the n − 1 coin sub-problem}
for n > 1,
F (0) = 0,F (1) = value from picking up first coin.
Dynamic Programming example: Coin-row Problem
• Let F (n) denote the maximum total amount of money picked up
after considering all n coins in row.
• Let ci denote the monetary value of coin number i .
Then we can write the recurrence relationship:
F (n) = max{cn+total value of solution to the n − 2 coin sub-problem,
total value of solution to the n − 1 coin sub-problem}
for n > 1,
F (0) = 0,F (1) = value from picking up first coin.
Formally:
F (n) = max{cn + F (n − 2),F (n − 1)} for n > 1,
F (0) = 0,F (1) = c1.
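The recurrence F (n) = max{cn + F (n − 2), F (n − 1)} translates directly into a bottom-up table fill. A sketch (Python, function name assumed):

```python
def coin_row_value(coins):
    """Maximum pickable value from a row of coins with no two
    adjacent coins selected: F(n) = max(c_n + F(n-2), F(n-1))."""
    n = len(coins)
    F = [0] * (n + 1)            # F[0] = 0
    if n >= 1:
        F[1] = coins[0]          # F(1) = c1
    for i in range(2, n + 1):
        # either take coin i (and the best of the first i-2),
        # or skip it (and keep the best of the first i-1)
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    return F[n]
```

Each table entry is computed once from two earlier entries, so the fill runs in O(n) time and O(n) space.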
Coin-row Problem (continued)
F (n) = max{cn + F (n − 2),F (n − 1)} for n > 1,
F (0) = 0,F (1) = c1.

index (i)   0   1   2   3   4   5   6
coins       –   5   1   2  10   6   2
F (i)       0   5   5   7  15  15  17

F [0] = 0,F [1] = c1 = 5
F [2] = max(1 + 0,5) = 5
F [3] = max(2 + 5,5) = 7
F [4] = max(10 + 5,7) = 15
F [5] = max(6 + 7,15) = 15
F [6] = max(2 + 15,15) = 17
Coin-row Problem (continued)
• In order to retrieve the coins in the maximum subset, we must
backtrack from the last coin.
• This is called a backtrace.
• First, we find the previous value.
• The previous value is the maximum of cn + F (n − 2) and F (n − 1).
• Say F (n − 1) is the maximum; then that must have been the cell
from which we reached F (n).
• Backtracking from F (n − 1), we then see how we got to F (n − 1) –
either from cn−1 + F (n − 3) or F (n − 2).
• Continue until we reach F (0).
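The backtrace described above can be added to the table fill so that the selected coins are recovered as well. A sketch (Python, names assumed; ties are broken in favour of taking the coin):

```python
def coin_row_select(coins):
    """Fill the DP table, then backtrack from F(n) to recover
    which coins were selected (returned as 1-based indices)."""
    n = len(coins)
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = coins[0]
    for i in range(2, n + 1):
        F[i] = max(coins[i - 1] + F[i - 2], F[i - 1])
    # Backtrace: at each cell, decide which case produced F(i).
    selected, i = [], n
    while i >= 1:
        if i == 1 or coins[i - 1] + F[i - 2] >= F[i - 1]:
            selected.append(i)   # coin i was taken; skip its neighbour
            i -= 2
        else:
            i -= 1               # coin i was skipped
    return F[n], sorted(selected)
```

On the lecture example 5, 1, 2, 10, 6, 2 this recovers value 17 and coins {c1, c4, c6}, matching the worked solution.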
Coin-row Problem (continued)
• The previous value is the maximum of cn + F (n − 2) and F (n − 1).

index (i)   0   1   2   3   4   5   6
coins       –   5   1   2  10   6   2
F (i)       0   5   5   7  15  15  17

• F (6) came from either F (5) = 15 or c6 + F (4) = 17. It was the
latter, so c6 = 2 was selected.
• We look at the solution to F (4). Using the same process, the
maximum at F (4) came from c4 + F (2), so c4 = 10 was selected.
• Finally, we have F (2) = F (1), so the best solution for F (2) must be
not to select c2 but to select c1 = 5.
• The optimal solution is {c1, c4, c6}.
Question
What is the worst case time and space complexity of this solution?
Edit Distance
Consider the problem of comparing two sequences/strings.
Applications
• Spell checkers, diff file revisions, plagiarism detection
• Bioinformatics: comparisons of DNA sequences
• Speech recognition
Solution? Direct position-wise comparison will not work.
One possible solution is the edit distance.
Edit Distance
Definition
The Levenshtein distance or edit distance of two strings/sequences S1
and S2 is the minimum number of point mutations required to change
S1 into S2, where a point mutation is one of the following
1 substitution of a character, or
2 insertion of a character, or
3 deletion of a character.
• A classic problem for dynamic programming solutions.
Edit Distance
• The Edit distance is a generalization of the Hamming distance.
The Hamming distance compares the number of differences in the
two strings. It compares the i th position of string S1 with the i th
position of string S2.
Example:
S1 = A T A T A T A T
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
S2 = T A T A T A T A
• The Hamming distance d(S1,S2) = 8.
• The edit distance, on the other hand, allows
insertions/deletions/substitutions of characters. Example:
S1 = − A T A T A T A T
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
S2 = T A T A T A T A −
• The edit distance Ed(S1,S2) = 2: insert T at the front and
remove T at the end of S1 to turn it into S2.
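The Hamming distance of the example strings can be checked directly. A small sketch (function name assumed):

```python
def hamming(s1, s2):
    """Number of positions where two equal-length strings differ."""
    assert len(s1) == len(s2), "Hamming distance needs equal lengths"
    return sum(a != b for a, b in zip(s1, s2))
```

For S1 = ATATATAT and S2 = TATATATA every position differs, giving a Hamming distance of 8, even though, as the slide notes, only two edit operations are needed.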
Edit Distance
Idea: Imagine we have two strings X , of length n, and Y , of length m.
Let M[n,m] represent the edit distance between the strings X [1 . . . n]
and Y [1 . . .m].
We essentially do a character by character comparison, but one that
allows for insertions/deletions/substitutions. Similar to coin-row
problem, we want to reuse solutions to the edit distance between
substrings of X and Y . But how?
Let's study how to match X [1 . . . n] and Y [1 . . .m] and develop a
recurrence relationship that will tell us how.
Edit Distance
Idea: Consider the following scenarios when computing the edit
distance between X and Y and we are comparing the final characters
of the strings, X [n] and Y [m]:
Case 1 (X [n] == Y [m]): This means the last characters match, so no
additional distance to add to total edit distance so far.
• Hence, edit distance M[n,m] = edit distance of matching the
substrings X [1 . . . n − 1] and Y [1 . . .m − 1].
• Or, M[n,m] = M[n − 1,m − 1].
• E.g., matching strings
X = a b c
Y = u v c
Edit Distance
Case 2 (X [n] != Y [m]): This means the last characters mismatch, and
we need to apply one of the edit operations. Which one do we choose,
and which calculated edit distance do we reuse?
• Delete X [n] from X , at the cost of 1 unit to total edit distance.
Hence M[n,m] = 1 + M[n − 1,m].
X = a b d
Y = a b
• Insert Y [m] into X . Hence M[n,m] = 1 + M[n,m − 1].
X = a b
Y = a b d
• Substitute Y [m] for X [n], so the characters now match. Hence
M[n,m] = 1 + M[n − 1,m − 1].
X = a b d
Y = a b e
Edit Distance
Recall M[n,m] represents the edit distance (minimum number of edit
operations) needed to match strings X [1 . . . n] and Y [1 . . .m].
Computing the Edit Distance
M[n,m] = M[n − 1,m − 1] if X [n] = Y [m]
M[n,m] = 1 + min (M[n − 1,m − 1],M[n − 1,m],M[n,m − 1]) otherwise
M[n,0] = n, M[0,m] = m
Recursive definition! Similar to the coin-row problem, we build a
dynamic programming table/matrix for M[n,m].
I.e., to compute M[n,m], a table M[0 . . . n,0 . . .m] is computed and
filled.
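The recurrence and base cases above translate directly into a bottom-up table fill. A sketch (Python, function name assumed):

```python
def edit_distance(X, Y):
    """Bottom-up Levenshtein distance: M[i][j] holds the edit
    distance between the prefixes X[:i] and Y[:j]."""
    n, m = len(X), len(Y)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        M[i][0] = i              # base case: delete all of X[:i]
    for j in range(m + 1):
        M[0][j] = j              # base case: insert all of Y[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:        # last characters match
                M[i][j] = M[i - 1][j - 1]
            else:
                M[i][j] = 1 + min(M[i - 1][j - 1],  # substitute
                                  M[i - 1][j],      # delete X[i]
                                  M[i][j - 1])      # insert Y[j]
    return M[n][m]
```

Each of the (n + 1)(m + 1) cells is filled in constant time from its three neighbours, giving O(nm) time and space. On the upcoming example, Ed(annual, annealing) = 4.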
Edit Distance – Example
a n n e a l i n g
0 1 2 3 4 5 6 7 8 9
a 1
n 2
n 3
u 4
a 5
l 6
Compute Ed(annual, annealing).
(RMIT University) Dynamic Programming Lecture 8 23 / 100
Edit Distance – Example
a n n e a l i n g
0 1 2 3 4 5 6 7 8 9
a 1 0 1 2 3 4 5 6 7 8
n 2 1 0 1 2 3
n 3
u 4
a 5
l 6
Compute Ed(annual, annealing).
(RMIT University) Dynamic Programming Lecture 8 37 / 100
Edit Distance – Example
And so on …
(RMIT University) Dynamic Programming Lecture 8 38 / 100
Edit Distance – Example
a n n e a l i n g
0 1 2 3 4 5 6 7 8 9
a 1 0 1 2 3 4 5 6 7 8
n 2 1 0 1 2 3 4 5 6 7
n 3 2 1 0 1 2 3 4 5 6
u 4 3 2 1 1 2 3 4 5 6
a 5 4 3 2 2 1 2 3 4 5
l 6 5 4 3 3 2 1 2 3 4
Compute Ed(annual, annealing).
(RMIT University) Dynamic Programming Lecture 8 39 / 100
Edit Distance – Backtrace
• In addition to the minimum edit distance, we would like to know
the sequence of operations (insertion, deletion, substitution,
match) that achieves this edit distance, and how the strings align.
• Use a backtrace.
• From the (m,n) cell (bottom right corner of the table), work out
which operation and adjacent cell we used to arrive at (m,n).
Repeat this process until we reach the (0,0) cell.
• This produces a path from (0,0) to (m,n) that is non-decreasing in
edit distance.
• It may not be unique, i.e., several alignments/sequences of
operations can lead to the same edit distance. This means multiple
backtraces.
(RMIT University) Dynamic Programming Lecture 8 40 / 100
Edit Distance – Backtrace
a n n e a l i n g
0 1 2 3 4 5 6 7 8 9
a 1 0 1 2 3 4 5 6 7 8
n 2 1 0 1 2 3 4 5 6 7
n 3 2 1 0 1 2 3 4 5 6
u 4 3 2 1 1 2 3 4 5 6
a 5 4 3 2 2 1 2 3 4 5
l 6 5 4 3 3 2 1 2 3 4
MMMSMMIII
‘M’ – Match, ‘S’ – Substitution, ‘I’ – Insertion, ‘D’ – Deletion
Compute Ed(annual, annealing).
(RMIT University) Dynamic Programming Lecture 8 41 / 100
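The backtrace above can be sketched in Python (our own illustration; `backtrace` is a hypothetical name). It rebuilds the table, then walks from the bottom-right cell back towards (0,0), checking which recurrence case produced each cell:

```python
def backtrace(x, y):
    """Recover one M/S/I/D operation sequence from the filled DP table."""
    n, m = len(x), len(y)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        M[i][0] = i
    for j in range(m + 1):
        M[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                M[i][j] = M[i - 1][j - 1]
            else:
                M[i][j] = 1 + min(M[i - 1][j - 1], M[i - 1][j], M[i][j - 1])
    ops = []
    i, j = n, m
    while i > 0 or j > 0:
        # Test the cases in a fixed order; ties mean multiple valid backtraces.
        if i > 0 and j > 0 and x[i - 1] == y[j - 1] and M[i][j] == M[i - 1][j - 1]:
            ops.append('M'); i -= 1; j -= 1      # match
        elif i > 0 and j > 0 and M[i][j] == 1 + M[i - 1][j - 1]:
            ops.append('S'); i -= 1; j -= 1      # substitution
        elif i > 0 and M[i][j] == 1 + M[i - 1][j]:
            ops.append('D'); i -= 1              # deletion
        else:
            ops.append('I'); j -= 1              # insertion
    return ''.join(reversed(ops))
```

On the example, `backtrace("annual", "annealing")` reproduces the alignment MMMSMMIII shown on the slide.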
Edit Distance
• The time complexity for table construction is Θ(nm) where
|S1| = n and |S2| = m. The backtrace’s worst-case complexity is
Θ(m + n) (zigzag).
The solution also uses Θ(nm) space.
• DEMO: http://www.let.rug.nl/kleiweg/lev/
• The canonical reference on Edit Distance is: D. Gusfield.
Algorithms on Strings, Trees, and Sequences. Cambridge
University Press, USA, 1997.
(RMIT University) Dynamic Programming Lecture 8 42 / 100
Overview
1 Overview
2 Edit Distance
3 Knapsack Problem
4 Warshall’s Algorithm
5 Summary
(RMIT University) Dynamic Programming Lecture 8 43 / 100
Knapsack Problem
Knapsack Problem
Given n items of known weights w1, . . . , wn and the values v1, . . . , vn
and a knapsack of capacity W , find the most valuable subset of the
items that fit into the knapsack.
• Recall that the exact solution for all instances of this problem has
been proven to be Ω(2^n).
• We can solve the problem using dynamic programming in
“pseudo-polynomial” time.
(RMIT University) Dynamic Programming Lecture 8 44 / 100
DP Knapsack Problem – Sketch
• Consider an instance of the knapsack problem defined by the first
i items, 1 ≤ i ≤ n, with weights w1 . . . wi, values v1 . . . vi, and
capacity j, 1 ≤ j ≤W .
• Let V [i, j] be the optimal value of the subproblem instance with
the first i items and a knapsack capacity of j .
(RMIT University) Dynamic Programming Lecture 8 45 / 100
DP Knapsack Problem – Sketch
• Suppose we are solving a (sub)instance of the knapsack problem
V [i, j].
• We can divide all the subsets of the first i items that fit into the
knapsack of capacity j into two categories:
• Among the subsets that do not include the i-th item, the value of the
optimal subset is, by definition, V [i − 1, j].
• Among the subsets that do include the i-th item (j − wi ≥ 0), an
optimal subset is made up of this item and an optimal subset of the
first i − 1 items that fit into the knapsack of capacity j − wi. The
value of such an optimal subset is vi + V [i − 1, j − wi].
• Whether we include the i-th item depends on whether it can fit into
the knapsack and, if so, which option leads to the larger value
V [i, j].
(RMIT University) Dynamic Programming Lecture 8 46 / 100
DP Knapsack Problem – Sketch
This leads to the following recurrence:
V [i, j] = max(V [i − 1, j], vi + V [i − 1, j − wi]) if j − wi ≥ 0,
V [i, j] = V [i − 1, j] if j − wi < 0.
V [0, j] = 0 for j ≥ 0 and V [i,0] = 0 for i ≥ 0.
(RMIT University) Dynamic Programming Lecture 8 47 / 100
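The recurrence translates into a short bottom-up table fill. A Python sketch (our own, not the textbook's code; `knapsack` is an assumed name):

```python
def knapsack(weights, values, W):
    """Bottom-up fill of V[i][j]; returns the optimal value V[n][W]."""
    n = len(weights)
    # Row 0 and column 0 stay 0 (the base cases of the recurrence).
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if j - wi >= 0:   # item i fits: take the better of skip vs. include
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:             # item i cannot fit
                V[i][j] = V[i - 1][j]
    return V[n][W]
```

On the slide's instance (weights 3, 2, 1, 4, 5; values $25, $20, $15, $40, $50; W = 6) this returns 65, the value in the bottom-right cell of the completed table.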
Bottom-Up DP algorithm
Bottom-up Dynamic Programming: what we have been doing up to
this point – computing solutions to all entries in the dynamic
programming table.
For example, the dynamic programming solutions to the coin-row
problem and the edit distance.
Given the following problem, how do we solve it using a Bottom-Up
Dynamic Programming algorithm?
Knapsack capacity
W = 6.
i 1 2 3 4 5
weight(wi) 3 2 1 4 5
value(vi) $25 $20 $15 $40 $50
(RMIT University) Dynamic Programming Lecture 8 48 / 100
Bottom-Up DP algorithm
We record the solutions to each smaller problem in a table.
↓ i W → 0 1 2 3 4 5 6
0
1
2
3 ?
4
5 GOAL
V [3,4] = ? stores the optimal value for a knapsack of capacity
4 using the first 3 items
(RMIT University) Dynamic Programming Lecture 8 49 / 100
Bottom-Up DP algorithm
V [i, j] = max(V [i − 1, j], vi + V [i − 1, j − wi]) if j − wi ≥ 0,
V [i, j] = V [i − 1, j] if j − wi < 0.
V [0, j] = 0 for j ≥ 0 and V [i,0] = 0 for i ≥ 0.
Filling the table row by row, left to right, gives:
↓ i W → 0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
w1 = 3 v1 = 25 1 0 0 0 25 25 25 25
w2 = 2 v2 = 20 2 0 0 20 25 25 45 45
w3 = 1 v3 = 15 3 0 15 20 35 40 45 60
w4 = 4 v4 = 40 4 0 15 20 35 40 55 60
w5 = 5 v5 = 50 5 0 15 20 35 40 55 65
(RMIT University) Dynamic Programming Lecture 8 50 / 100
Bottom-Up DP algorithm – Backtrace
How do we find the set of items to include? Use a backtrace, similar to
the edit distance and coin-row problems.
1 From V [n,W ], trace back how we arrived at this table cell – either
from V [n − 1,W ] (item n not included) or V [n − 1,W − wn] (item n
included).
2 Repeat this step until we reach row 0.
3 The items included along the backtrace form the final solution to
the knapsack problem.
(RMIT University) Dynamic Programming Lecture 8 51 / 100
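Steps 1–3 above can be sketched in Python (our own illustration; `knapsack_items` is a hypothetical name). The test `V[i][j] != V[i-1][j]` detects that item i must have been included, since otherwise the value would have been carried down unchanged:

```python
def knapsack_items(weights, values, W):
    """Return (optimal value, 1-based indices of included items) via backtrace."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]
            if j >= wi:
                V[i][j] = max(V[i][j], vi + V[i - 1][j - wi])
    # Backtrace from V[n][W] up to row 0.
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:    # value changed, so item i was taken
            items.append(i)
            j -= weights[i - 1]       # reduce remaining capacity
    return V[n][W], sorted(items)
```

On the slide's instance this yields value 65 with items {3, 5} (weights 1 + 5 = 6, values $15 + $50 = $65).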
Bottom-Up DP algorithm – Backtrace
↓ i W → 0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
w1 = 3 v1 = 25 1 0 0 0 25 25 25 25
w2 = 2 v2 = 20 2 0 0 20 25 25 45 45
w3 = 1 v3 = 15 3 0 15 20 35 40 45 60
w4 = 4 v4 = 40 4 0 15 20 35 40 55 60
w5 = 5 v5 = 50 5 0 15 20 35 40 55 65
Let’s do the backtrace!
(RMIT University) Dynamic Programming Lecture 8 52 / 100
Bottom-Up DP algorithm – Backtrace
↓ i W → 0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
w1 = 3 v1 = 25 1 0 0 0 25 25 25 25
w2 = 2 v2 = 20 2 0 0 20 25 25 45 45
w3 = 1 v3 = 15 3 0 15 20 35 40 45 60
w4 = 4 v4 = 40 4 0 15 20 35 40 55 60
w5 = 5 v5 = 50 5 0 15 20 35 40 55 65
Question: In general, using the dynamic programming table, how can
we tell if there are multiple optimal solutions to a Knapsack problem?
(RMIT University) Dynamic Programming Lecture 8 56 / 100
DP Knapsack Problem
• The complexity of constructing the dynamic programming table is
Θ(nW ) in time and space.
• The complexity of performing the backtrace to find the optimal
subset is Θ(n + W ).
NOTE : The running time of this algorithm is not a polynomial function
of n alone; rather it is a polynomial function of n and W , the largest
integer involved in defining the problem. Such algorithms are known as
pseudo-polynomial. They are efficient when W and the weights {wi}
are small, but less practical as these values grow large.
(RMIT University) Dynamic Programming Lecture 8 57 / 100
DP Knapsack Problem – Top-Down
• A divide-and-conquer (top-down) approach to solving knapsack
generally recomputes many previously computed subproblems,
and is hence inefficient.
• The bottom-up dynamic programming approach avoids
recomputation, but can compute solutions to many unnecessary
subproblems.
• Can we combine the space saving of divide and conquer with the
speed-up of the bottom-up approach?
(RMIT University) Dynamic Programming Lecture 8 58 / 100
DP Knapsack Problem – Top-Down
ALGORITHM MFKnapsack (i, j)
/* Implement the memory function method (top-down) for the knapsack problem. */
/* INPUT : A non-negative integer i indicating the number of items being considered (the first i items) and
a non-negative integer j indicating the remaining knapsack capacity. */
/* OUTPUT : The value of an optimal, feasible subset of the first i items. */
/* NOTE: Requires global arrays w[1 . . . n] and v[1 . . . n] of weights and values of n items, and
table F [0 . . . n, 0 . . .W ] initialized with −1s, except for row 0 and column 0 being all 0s. */
1: if F [i, j] < 0 then
2: if j < w[i] then
3: x = MFKnapsack(i − 1, j)
4: else
5: x = max(MFKnapsack(i − 1, j), v[i] + MFKnapsack(i − 1, j − w[i]))
6: end if
7: F [i, j] = x
8: end if
9: return F [i, j]
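A direct transcription of this memory-function scheme into runnable Python might look like the sketch below (the table is held in a closure rather than the global arrays the pseudocode assumes; names are ours):

```python
def mf_knapsack(weights, values, capacity):
    """Memory-function (top-down) 0/1 knapsack.

    F[i][j] = best value using the first i items with capacity j;
    -1 marks a not-yet-computed cell, while row 0 and column 0
    start at 0, matching the table initialisation on the slides.
    """
    n = len(weights)
    w = [0] + list(weights)   # pad so items are 1-indexed as in the slides
    v = [0] + list(values)
    F = [[0] * (capacity + 1)] + [[0] + [-1] * capacity for _ in range(n)]

    def solve(i, j):
        if F[i][j] < 0:                   # not computed yet
            if j < w[i]:                  # item i cannot fit
                F[i][j] = solve(i - 1, j)
            else:                         # max of skipping vs taking item i
                F[i][j] = max(solve(i - 1, j),
                              v[i] + solve(i - 1, j - w[i]))
        return F[i][j]

    return solve(n, capacity)

# The lecture's instance: the optimal value is 65 (items 3 and 5).
print(mf_knapsack([3, 2, 1, 4, 5], [25, 20, 15, 40, 50], 6))  # 65
```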
DP Knapsack Problem – Top-Down
Initially only row 0 and column 0 are known; every other cell is −1,
i.e., not yet computed:
↓ i W → 0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
w1 = 3, v1 = 25 1 0 -1 -1 -1 -1 -1 -1
w2 = 2, v2 = 20 2 0 -1 -1 -1 -1 -1 -1
w3 = 1, v3 = 15 3 0 -1 -1 -1 -1 -1 -1
w4 = 4, v4 = 40 4 0 -1 -1 -1 -1 -1 -1
w5 = 5, v5 = 50 5 0 -1 -1 -1 -1 -1 -1
After the top-level call MFKnapsack(5, 6) returns, only the cells the
recursion actually needed have been computed; cells that were never
visited are marked –:
↓ i W → 0 1 2 3 4 5 6
0 0 0 0 0 0 0 0
w1 = 3, v1 = 25 1 0 0 0 25 25 25 25
w2 = 2, v2 = 20 2 0 0 20 – – 45 45
w3 = 1, v3 = 15 3 0 15 20 – – – 60
w4 = 4, v4 = 40 4 0 15 – – – – 60
w5 = 5, v5 = 50 5 0 – – – – – 65
Top-Down vs. Bottom-Up
In general, when should we use top-down versus bottom-up dynamic
programming? Top-down incurs the additional space and time cost of
maintaining the recursion stack. Hence:
• Bottom-up: When the final problem instance requires most or all
of the sub-problem instances to be solved, e.g., edit distance.
• Top-down: When the final problem instance only requires a subset
of the sub-problem instances to be solved, e.g., possibly
knapsack.
Overview
1 Overview
2 Edit Distance
3 Knapsack Problem
4 Warshall’s Algorithm
5 Summary
Transitive closure
Definition
The transitive closure of a directed graph with n vertices is an n × n
boolean matrix, used to determine if there is a path between any pair
of vertices in the graph. Yes/True/1 if there is a path, No/False/0 if not.
We want to determine for all pairs of vertices.
a b
c d
A =
a b c d
a 0 1 0 0
b 0 0 0 1
c 0 0 0 0
d 1 0 1 0
T =
a b c d
a 1 1 1 1
b 1 1 1 1
c 0 0 0 0
d 1 1 1 1
(a) digraph (b) adjacency matrix (c) transitive closure
Warshall’s Algorithm
Why compute transitive closure?
• In spreadsheet software, when a cell is changed, which other cells'
computations directly or indirectly depend on it (i.e., which cells
might also need updating)?
• Critical communication systems: is it possible for every node in the
system to communicate with every other node?
Warshall’s Algorithm – Sketch
Idea: Progressively use each vertex as an intermediate to join two
paths.
If we know there is a path from vertex i to an intermediate vertex k ,
and a path from vertex k to vertex j , then we know there is a path
from i to j .
Warshall’s algorithm progressively considers using each vertex as the
intermediate, and when all the vertices have been considered as
intermediates, we have the transitive closure of the graph.
Warshall’s Algorithm – Sketch
Idea: Construct the transitive closure of a given digraph with n vertices
through a series of n-by-n binary matrices:
R(0), . . . ,R(k−1),R(k), . . . ,R(n).
The element R(k)[i, j] in the i-th row and the j-th column is equal to 1 if
and only if there is a directed path from the i-th vertex to the j-th vertex
using only the first k vertices as intermediates.
R(0) is the initial adjacency matrix and R(n) specifies the complete
transitivity between vertices, i.e., the transitive closure.
Updating from R(k−1) to R(k):
• If a path already exists between vertices i and j, it remains, i.e.,
if R(k−1)[i, j] = 1, then R(k)[i, j] = 1.
• If we can use vertex k as an intermediate vertex to get from vertex i
to vertex j, and we previously did not know there was such a path,
then update our transitivity information, i.e.,
• if R(k−1)[i, j] = 0, and R(k−1)[i, k] = 1 and R(k−1)[k, j] = 1, then
set R(k)[i, j] = 1.
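Both update cases collapse into a single boolean recurrence, which is exactly what the pseudocode on the later slide computes cell by cell:

```latex
R^{(k)}[i,j] \;=\; R^{(k-1)}[i,j] \;\lor\; \left( R^{(k-1)}[i,k] \,\land\, R^{(k-1)}[k,j] \right)
```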
Warshall’s Algorithm – Example
R(0) =
1 2 3 4
1 0 1 0 0
2 0 0 0 1
3 0 0 0 0
4 1 0 1 0
Applying Warshall’s algorithm to calculate the transitive closure of a
digraph.
R(1) =
1 2 3 4
1 0 1 0 0
2 0 0 0 1
3 0 0 0 0
4 1 1 1 0
R(2) =
1 2 3 4
1 0 1 0 1
2 0 0 0 1
3 0 0 0 0
4 1 1 1 1
R(3) =
1 2 3 4
1 0 1 0 1
2 0 0 0 1
3 0 0 0 0
4 1 1 1 1
T = R(4) =
1 2 3 4
1 1 1 1 1
2 1 1 1 1
3 0 0 0 0
4 1 1 1 1
Warshall’s Algorithm – Pseudocode
ALGORITHM Warshall (A[1 . . . n,1 . . . n])
/* Compute the transitive closure of a graph using Warshall’s algorithm.
*/
/* INPUT : The adjacency matrix A of a digraph with n vertices. */
/* OUTPUT : The transitive closure of the digraph. */
1: R(0) = A
2: for k = 1 to n do
3: for i = 1 to n do
4: for j = 1 to n do
5: R(k)[i, j] = R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
6: end for
7: end for
8: end for
9: return R(n)
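A direct transcription of the pseudocode into runnable Python might look like the sketch below. One simplification we make: the matrix is updated in place rather than keeping all n + 1 matrices, which is safe for Warshall's algorithm because row k and column k do not change during pass k:

```python
def warshall(adj):
    """Transitive closure of a digraph via Warshall's algorithm.

    adj is an n x n 0/1 adjacency matrix (nested lists, 0-indexed
    vertices); returns the closure matrix R(n).
    """
    n = len(adj)
    r = [row[:] for row in adj]      # R(0) = A (copy, leave input intact)
    for k in range(n):               # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# The lecture's digraph: edges a->b, b->d, d->a, d->c (a,b,c,d = 0,1,2,3).
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)   # rows for a, b, d are all 1s; row for c is all 0s
```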
Warshall’s Algorithm
• The time efficiency of Warshall’s algorithm is Θ(n3) and uses
Θ(n2) space.
• For sparse graphs, the transitive closure can be calculated more
efficiently using an adjacency list representation with a depth-first
or breadth-first traversal.
• Using a DFS or BFS traversal with n vertices and m edges takes
Θ(n + m) time. Doing it n times takes Θ(n2 + nm) time.
• If the graph is sparse, i.e. m ≈ n, the time efficiency is Θ(n2).
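The traversal-based approach for sparse graphs can be sketched as follows (a sketch only, assuming an adjacency-list input; the function name is ours). One BFS per source vertex marks every vertex reachable from it:

```python
from collections import deque

def closure_by_bfs(adj_list):
    """Transitive closure of a sparse digraph: one BFS per source vertex.

    adj_list[u] is the list of successors of vertex u (0-indexed).
    Each BFS is Theta(n + m), so the whole closure is Theta(n^2 + nm).
    """
    n = len(adj_list)
    t = [[0] * n for _ in range(n)]
    for s in range(n):
        q = deque(adj_list[s])       # start from s's direct successors
        while q:
            u = q.popleft()
            if not t[s][u]:          # first time we reach u from s
                t[s][u] = 1
                q.extend(adj_list[u])
    return t

# Same digraph as the Warshall example: 0->1, 1->3, 3->0, 3->2.
print(closure_by_bfs([[1], [3], [], [0, 2]]))
```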
Overview
1 Overview
2 Edit Distance
3 Knapsack Problem
4 Warshall’s Algorithm
5 Summary
Summary
• Described dynamic programming and how it is useful to solve
problems that require solving sub-problems multiple times.
• Examples:
• Coin-row problem
• Computing the edit distance
• Knapsack – Bottom up and top down
• Transitive closure – Warshall’s algorithm