Algorithms: COMP3121/9101
THE UNIVERSITY OF
NEW SOUTH WALES
School of Computer Science and Engineering
University of New South Wales
7: DYNAMIC PROGRAMMING
COMP3121/3821/9101/9801 1 / 41
Dynamic Programming
The main idea of Dynamic Programming: build an optimal
solution to the problem from optimal solutions for (carefully chosen)
smaller size subproblems.
Subproblems are chosen in a way which allows recursive construction of
optimal solutions to such subproblems from optimal solutions to smaller
size subproblems.
Efficiency of DP comes from the fact that the sets of subproblems needed
to solve larger problems heavily overlap; each subproblem is solved only
once and its solution is stored in a table for multiple use for solving
larger problems.
Dynamic Programming: Activity Selection
Instance: A list of activities ai, 1 ≤ i ≤ n with starting times si and finishing
times fi. No two activities can take place simultaneously.
Task: Find a subset of compatible activities of maximal total duration.
Remember, we used the Greedy Method to solve a somewhat similar problem
of finding a subset with the largest possible number of compatible
activities, but the Greedy Method does not work for the present problem.
We start by sorting these activities by their finishing time into a
non-decreasing sequence, so we will assume that f1 ≤ f2 ≤ . . . ≤ fn.
For every i ≤ n we solve the following subproblems:
Subproblem P (i): find a subsequence σi of the sequence of activities
Si = ⟨a1, a2, . . . , ai⟩ such that:
1 σi consists of non-overlapping activities;
2 σi ends with activity ai;
3 σi is of maximal total duration among all subsequences of Si which
satisfy 1 and 2.
Note: the role of Condition 2 is to simplify recursion.
Dynamic Programming: Activity Selection
Let T (i) be the total duration of the optimal solution S(i) of the subproblem P (i).
For S(1) we choose a1; thus T (1) = f1 − s1;
Recursion: assuming that we have solved subproblems for all j < i and
stored them in a table, we let
T (i) = max{T (j) : j < i & fj ≤ si}+ fi − si, where the max is taken to be 0 if no such j exists.
(Figure: the activities laid out on a timeline, illustrating that an activity aj with fj ≤ si is compatible with ai.)
In the table, for every i, besides T (i), we also store π(i) = j for which the
above max is achieved:
π(i) = argmax{T (j) : j < i & fj ≤ si}
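The recursion above can be sketched in Python as follows (a minimal illustration; the function name and the (start, finish)-pair input representation are my own assumptions, and the input is assumed already sorted by finishing time):

```python
def max_duration_selection(activities):
    # activities: list of (s, f) pairs, assumed sorted by finishing time f.
    # T[i]: best total duration of a compatible subsequence ending with a_i.
    # pi[i]: the predecessor j realising the max, for backtracking.
    n = len(activities)
    T = [0.0] * n
    pi = [None] * n
    for i, (si, fi) in enumerate(activities):
        best = 0.0
        for j in range(i):
            _, fj = activities[j]
            if fj <= si and T[j] > best:   # a_j compatible with a_i
                best, pi[i] = T[j], j
        T[i] = best + (fi - si)
    # pick the subproblem with the largest total duration, then backtrack
    last = max(range(n), key=T.__getitem__)
    total = T[last]
    chosen = []
    while last is not None:
        chosen.append(last)
        last = pi[last]
    return total, chosen[::-1]             # duration, indices in order
```

For example, on activities (0, 3), (2, 5), (4, 7), (1, 8) the single long activity (1, 8) of duration 7 beats any compatible pair.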
Dynamic Programming: Activity Selection
Why does such a recursion produce optimal solutions to subproblems P (i)?
Let the optimal solution of subproblem P (i) be the sequence
S = (ak1 , ak2 , . . . , akm−1 , akm) where km = i;
We claim: the truncated subsequence S′ = (ak1 , ak2 , . . . , akm−1) is an optimal
solution to subproblem P (km−1), where km−1 < i.
Why? We apply the same “cut and paste” argument which we used to prove
the optimality of the greedy solutions!
If there were a sequence S∗ of a larger total duration than the duration of
sequence S′ and also ending with activity akm−1 , we could obtain a sequence Ŝ
by extending the sequence S∗ with activity akm and obtain a solution for
subproblem P (i) with a longer total duration than the total duration of
sequence S, contradicting the optimality of S.
Thus, the optimal solution S = (ak1 , ak2 , . . . , akm−1 , akm) for problem P (i)
(= P (akm)) is obtained from the optimal solution S′ = (ak1 , ak2 , . . . , akm−1)
for problem P (akm−1) by extending it with akm .
Dynamic Programming: Activity Selection
Continuing with the solution of the problem, we now let
Tmax = max{T (i) : i ≤ n};
last = argmax{T (i) : i ≤ n}.
We can now reconstruct the optimal sequence which solves our problem from
the table of partial solutions, because in the ith slot of the table, besides T (i),
we also store π(i) = j, (j < i) such that the optimal solution of P (i) extends
the optimal solution of subproblem P (j).
Thus, the sequence in the reverse order is given by last, π(last), π(π(last)), . . ..
Why is such a solution optimal, i.e., why does looking for optimal solutions of
P (i) which must end with ai not cause us to miss the optimal solution without
such an additional requirement?
Consider the optimal solution without such additional requirement, and
assume it ends with activity ak; then it would have been obtained as the
optimal solution of problem P (k).
Time complexity: having sorted the activities by their finishing times in time
O(n logn), we need to solve n subproblems P (i) for solutions ending in ai; for
each such interval ai we have to find all preceding compatible intervals and
their optimal solutions (to be looked up in a table). Thus, the total running time is O(n^2).
More Dynamic Programming Problems
Longest Increasing Subsequence: Given a sequence of n real
numbers A[1..n], determine a subsequence (not necessarily contiguous) of
maximum length in which the values in the subsequence are strictly
increasing.
Solution: For each i ≤ n we solve the following subproblems:
Subproblem P (i): Find a subsequence of the sequence A[1..i] of
maximum length in which the values are strictly increasing and which
ends with A[i].
Recursion: Assume we have solved the subproblems for all j < i, and
that we have put in a table S the values S[j] = ℓj which are the lengths
ℓj of maximal increasing sequences which end with A[j].
We now look for all A[m] such that m < i and such that A[m] < A[i].
Among those we pick m which produced the longest increasing
subsequence ending with A[m] and extend it with A[i] to obtain the
longest increasing subsequence which ends with A[i]:
More Dynamic Programming Problems
ℓi = max{ℓm : m < i & A[m] < A[i]}+ 1 (taking the max to be 0 if no such m exists, so that ℓi = 1)
π(i) = argmax{ℓm : m < i & A[m] < A[i]}
We store in the ith slot of the table the length ℓi of the longest increasing
subsequence ending with A[i] and π(i) = m such that the optimal
solution for P (i) extends the optimal solution for P (m).
So, we have found for every i ≤ n the longest increasing subsequence of
the sequence A[1..i] which ends with A[i].
Finally, from all such subsequences we pick the longest one.
solution = max{ℓi : i ≤ n}
The end point of such a sequence can be obtained as
end = argmax{ℓi : i ≤ n}
More Dynamic Programming Problems
We can now reconstruct the longest monotonically increasing sequence
by backtracking (thus getting it in the reverse order):
end, π(end), π(π(end)), . . .
Again, the condition for the sequence to end with A[i] is not restrictive
because if the optimal solution ends with some A[m], it would have been
constructed as the solution for P (m).
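A minimal Python sketch of this O(n^2) recursion with backtracking (the function name and variable names are my own, not from the slides):

```python
def longest_increasing_subsequence(A):
    # length[i]: length of the longest strictly increasing subsequence ending at A[i]
    # prev[i]:   index of the preceding element in that subsequence (for backtracking)
    n = len(A)
    length = [1] * n
    prev = [None] * n
    for i in range(n):
        for m in range(i):
            if A[m] < A[i] and length[m] + 1 > length[i]:
                length[i], prev[i] = length[m] + 1, m
    end = max(range(n), key=length.__getitem__)   # index where an LIS ends
    seq = []
    while end is not None:                        # backtrack via prev pointers
        seq.append(A[end])
        end = prev[end]
    return seq[::-1]
```

(The sketch assumes a non-empty input; ties between equally long subsequences are broken arbitrarily, as the slides note for π(i).)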
Time complexity: O(n^2).
Exercise: (somewhat tough, but very useful) Design an algorithm for
solving this problem which runs in time O(n log n).
More Dynamic Programming Problems
Making Change. You are given n types of coin denominations of values
v(1) < v(2) < ... < v(n) (all integers). Assume v(1) = 1, so that you can
always make change for any integer amount. Give an algorithm which
makes change for any given integer amount C with as few coins as
possible, assuming that you have an unlimited supply of coins of each
denomination.
Solution: DP recursion on the amount C. We will fill a table containing
C many slots, so that an optimal solution for an amount i is stored in slot i.
For amount 1 the solution is trivial: just use one coin of denomination v(1) = 1.
Assume we have found optimal solutions for every amount j < i and now
want to find an optimal solution for amount i.
More Dynamic Programming Problems
We consider optimal solutions opt(i− v(k)) for every amount of the form
i− v(k), where k ranges from 1 to n. (Recall v(1), . . . , v(n) are all of the
available denominations.)
Among all of these optimal solutions (which we find in the table we are
constructing recursively!) we pick one which uses the fewest number of coins,
say this is opt(i− v(m)) for some m, 1 ≤ m ≤ n.
We obtain an optimal solution opt(i) for amount i by adding to opt(i− v(m))
one coin of denomination v(m).
opt(i) = min{opt(i− v(k)) : 1 ≤ k ≤ n, v(k) ≤ i}+ 1
Why does this produce an optimal solution for amount i ≤ C?
Consider an optimal solution for amount i ≤ C; and say such solution includes
at least one coin of denomination v(m) for some 1 ≤ m ≤ n. But then
removing such a coin must produce an optimal solution for the amount
i− v(m) again by our cut-and-paste argument.
More Dynamic Programming Problems
However, we do not know which coins the optimal solution includes, so we try
all the available coins and then pick m for which the optimal solution for
amount i− v(m) uses the fewest number of coins.
It is enough to store in the ith slot of the table such m and opt(i), because this
allows us to reconstruct the optimal solution: look at m1 stored in slot i, then
at m2 stored in slot i− v(m1), then at m3 stored in slot i− v(m1)− v(m2), etc.
opt(C) is the solution we need.
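A Python sketch of this recursion, including the backtracking step just described (the function name and list representation are my own assumptions):

```python
def make_change(v, C):
    # v: available denominations, with v(1) = 1 among them, so every
    #    amount can be made. Returns the fewest coins summing to C.
    INF = float('inf')
    opt = [0] + [INF] * C          # opt[i]: fewest coins for amount i
    choice = [None] * (C + 1)      # denomination used by an optimal solution
    for i in range(1, C + 1):
        for coin in v:
            if coin <= i and opt[i - coin] + 1 < opt[i]:
                opt[i] = opt[i - coin] + 1
                choice[i] = coin
    coins, i = [], C               # backtrack through the choice table
    while i > 0:
        coins.append(choice[i])
        i -= choice[i]
    return opt[C], coins
```

For instance, with denominations {1, 5, 12, 19} and C = 15 the optimal answer is 5 + 5 + 5 (three coins), which a greedy choice of 12 would miss.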
Time complexity of our algorithm is O(nC).
Note: Our algorithm is NOT a polynomial-time algorithm in the length of
the input, because the length of a representation of C is only O(log C), while the
running time is O(nC).
But this is essentially the best we can do...
More Dynamic Programming Problems
Integer Knapsack Problem (Duplicate Items Allowed) You have n types of
items; all items of kind i are identical and of weight wi and value vi. You also have
a knapsack of capacity C. Choose a combination of available items which all fit in
the knapsack and whose value is as large as possible. You can take any number of
items of each kind.
Solution: DP recursion on the capacity C of the knapsack.
We build a table of optimal solutions for all knapsacks of capacities i ≤ C.
Assume we have solved the problem for all knapsacks of capacities j < i.
We now look at optimal solutions opt(i− wm) for all knapsacks of capacities
i− wm for all 1 ≤ m ≤ n;
Choose the one for which opt(i− wm) + vm is the largest;
Add to such optimal solution for the knapsack of size i− wm item m to obtain
a packing of a knapsack of size i of the highest possible value.
More Dynamic Programming Problems
opt(i) = max{opt(i− wm) + vm : 1 ≤ m ≤ n, wm ≤ i}.
π(i) = argmax{opt(i− wm) + vm : 1 ≤ m ≤ n, wm ≤ i}.
After C many steps we obtain opt(C), the largest total value of items that fit
in the knapsack of capacity C.
Which items are present in the optimal solution can again be obtained by
backtracking: if π(C) = k then the first item taken is of kind k, of weight wk and
value vk; if π(C − wk) = m then the second item is of kind m, and so on.
Note that π(i) might not be uniquely determined; in case of multiple equally
good solutions we pick arbitrarily among them.
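The recursion and backtracking above can be sketched as follows (a hypothetical helper of my own naming; item kinds are referred to by index):

```python
def unbounded_knapsack(weights, values, C):
    # opt[i]: largest value achievable in a knapsack of capacity i,
    #         with unlimited copies of each item kind available.
    # pi[i]:  index of the item kind added last in an optimal packing.
    opt = [0] * (C + 1)
    pi = [None] * (C + 1)
    for i in range(1, C + 1):
        for m in range(len(weights)):
            if weights[m] <= i and opt[i - weights[m]] + values[m] > opt[i]:
                opt[i] = opt[i - weights[m]] + values[m]
                pi[i] = m
    items, i = [], C               # backtrack to list the item kinds used
    while i > 0 and pi[i] is not None:
        items.append(pi[i])
        i -= weights[pi[i]]
    return opt[C], items
```

As the slides note, π(i) need not be unique; this sketch simply keeps the first maximiser found.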
Again, our algorithm is NOT polynomial in the length of the input.
More Dynamic Programming Problems
Integer Knapsack Problem (Duplicate Items NOT Allowed) You have
n items (some of which can be identical); item Ii is of weight wi and value vi.
You also have a knapsack of capacity C. Choose a combination of available
items which all fit in the knapsack and whose value is as large as possible.
This is an example of a “2D” recursion; we will be filling a table of size n× C,
row by row; subproblems P (i, c) for all i ≤ n and c ≤ C will be of the form:
choose from items I1, I2, . . . , Ii a subset which fits in a knapsack of capacity c
and is of the largest possible total value.
Fix now i ≤ n and c ≤ C and assume we have solved the subproblems for:
1 all j < i and all knapsacks of capacities from 1 to C;
2 for i we have solved the problem for all capacities d < c.
More Dynamic Programming Problems
We now have two options: either we take item Ii or we do not;
so we look at the optimal solutions opt(i− 1, c− wi) and opt(i− 1, c);
if wi ≤ c and opt(i− 1, c− wi) + vi > opt(i− 1, c)
then opt(i, c) = opt(i− 1, c− wi) + vi;
else opt(i, c) = opt(i− 1, c).
Final solution will be given by opt(n,C).
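A minimal Python sketch of this 2D recursion, filling the n × C table row by row (naming is my own):

```python
def knapsack_01(weights, values, C):
    # opt[i][c]: best value using only items I_1..I_i in capacity c.
    n = len(weights)
    opt = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for c in range(C + 1):
            opt[i][c] = opt[i - 1][c]                  # option 1: skip item I_i
            if w <= c and opt[i - 1][c - w] + v > opt[i][c]:
                opt[i][c] = opt[i - 1][c - w] + v      # option 2: take item I_i
    return opt[n][C]
```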
More Dynamic Programming Problems
Balanced Partition You have a set of n integers. Partition these
integers into two subsets such that you minimise |S1 − S2|, where S1 and
S2 denote the sums of the elements in each of the two subsets.
Solution: Let S be the total sum of all integers in the set; consider the
Knapsack problem (with duplicate items not allowed) with the knapsack
of size S/2 and with each integer xi of both size and value equal to xi.
Claim: the best packing of such knapsack produces optimally balanced
partition, with S1 being all the integers in the knapsack and S2 all the
integers left out of the knapsack.
Why? Since S = S1 + S2, we obtain S2 − S1 = S − 2S1 = 2(S/2− S1).
Thus, minimising S/2− S1 will minimise S2 − S1.
So, all we have to do is find a subset of these numbers with the largest
possible total sum which fits inside a knapsack of size S/2.
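A Python sketch of this reduction (the 0/1 knapsack table is specialised to the case where each integer is both weight and value; the function name is my own):

```python
def balanced_partition(xs):
    # 0/1 knapsack of capacity floor(S/2), where each integer x is an item
    # of both weight and value x. best[c] holds the largest achievable sum <= c.
    S = sum(xs)
    half = S // 2
    best = [0] * (half + 1)
    for x in xs:
        # iterate capacities downwards so each x is used at most once
        for c in range(half, x - 1, -1):
            best[c] = max(best[c], best[c - x] + x)
    S1 = best[half]                 # sum of the knapsack side of the partition
    return abs(S - 2 * S1)          # |S2 - S1|
```

For example, {1, 6, 11, 5} splits as {11} vs {1, 6, 5}, giving |S2 − S1| = 1.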
Dynamic Programming: Assembly line scheduling
Instance: Two assembly lines with workstations for n jobs.
On the first assembly line the kth job takes a1,k (1 ≤ k ≤ n) units of time to
complete; on the second assembly line the same job takes a2,k units of time.
To move the product from station k − 1 on the first assembly line to station k
on the second line it takes t1,k−1 units of time.
Likewise, to move the product from station k − 1 on the second assembly line
to station k on the first assembly line it takes t2,k−1 units of time.
Dynamic Programming: Assembly line scheduling
To bring an unfinished product to the first assembly line it takes e1 units of time;
To bring an unfinished product to the second assembly line it takes e2 units of time;
To get a finished product from the first assembly line to the warehouse it takes
x1 units of time;
To get a finished product from the second assembly line to the warehouse it
takes x2 units.
Task: Find a fastest way to assemble a product using both lines as necessary.
Dynamic Programming: Assembly line scheduling
Task: Find the fastest way through the factory.
Solution 1: a brute force search calculates the times for 2^n
many possible paths, and compares them.
Solution 2: recursion:
For each k ≤ n, we solve subproblems P (1, k) and P (2, k) by a simultaneous
recursion on k:
P (1, k): find the minimal amount of time m(1, k) needed to finish the first k
jobs, such that the kth job is finished on the kth workstation of the first
assembly line.
P (2, k): find the minimal amount of time m(2, k) needed to finish the first k
jobs, such that the kth job is finished on the kth workstation of the second
assembly line.
Dynamic Programming
We solve P (1, k) and P (2, k) by a simultaneous recursion:
Initially m(1, 1) = e1 + a1,1 and m(2, 1) = e2 + a2,1.
Recursion:
m(1, k) = min{m(1, k − 1) + a1,k, m(2, k − 1) + t2,k−1 + a1,k}
m(2, k) = min{m(2, k − 1) + a2,k, m(1, k − 1) + t1,k−1 + a2,k}
Finally, after obtaining m(1, n) and m(2, n) we choose
opt = min{m(1, n) + x1, m(2, n) + x2}.
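The simultaneous recursion above can be sketched in Python (a minimal illustration with my own naming; a[l][k], t[l][k], e and x follow the notation of the slides, with 0-based lines and stations):

```python
def assembly_line(a, t, e, x):
    # a[l][k]: processing time of job k at station k on line l (l in {0, 1})
    # t[l][k]: time to transfer the product after station k when leaving line l
    # e[l], x[l]: entry and exit times for line l
    n = len(a[0])
    m0, m1 = e[0] + a[0][0], e[1] + a[1][0]     # m(1, 1) and m(2, 1)
    for k in range(1, n):
        # stay on the same line, or cross over from the other line
        m0, m1 = (min(m0 + a[0][k], m1 + t[1][k - 1] + a[0][k]),
                  min(m1 + a[1][k], m0 + t[0][k - 1] + a[1][k]))
    return min(m0 + x[0], m1 + x[1])
```

Note that only the two values for station k − 1 are needed at each step, so the table can be collapsed to two running variables.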
This problem is important because it has the same design logic as the Viterbi
algorithm, an extremely important algorithm in many fields such as speech
recognition, decoding convolutional codes in telecommunications, etc., covered
in COMP4121 (Advanced Algorithms).
Dynamic Programming: Matrix chain multiplication
For any three matrices of compatible sizes we have A(BC) = (AB)C.
However, the number of real-number multiplications needed to obtain the
matrix product can be very different:
Example 2: Matrix chain multiplication.
A = 10 × 100; B = 100 × 5; C = 5 × 50
To evaluate (AB)C we need
(10× 5)× 100 + (10× 50)× 5 = 5000 + 2500 = 7500 multiplications;
To evaluate A(BC) we need
(100× 50)× 5 + (10× 50)× 100 = 25000 + 50000 = 75000 multiplications!
Dynamic Programming: Matrix chain multiplication
Problem Instance: A sequence of matrices A1A2…An;
Task: Group them in such a way as to minimise the total number of
multiplications needed to find the product matrix.
The total number of different distributions of brackets is equal to the number
of binary trees with n leaves.
The total number of different distributions of brackets satisfies the following
recursion (why?):
T (n) = ∑_{i=1}^{n−1} T (i)T (n− i), with T (1) = 1.
One can show that the solution satisfies T (n) = Ω(2^n).
Thus, we cannot do an exhaustive search for the optimal placement of the brackets.
Dynamic Programming: Matrix chain multiplication
The subproblems P (i, j) to be considered are:
“group matrices AiAi+1…Aj−1Aj in such a way as to minimise the total
number of multiplications needed to find the product matrix”.
Note: this looks like it is a case of a “2D” recursion, but we can actually do it
with a simple “linear” recursion.
We group such subproblems by the value of j − i and perform a recursion on
the value of j − i.
At each recursive step m we solve all subproblems P (i, j) for which j − i = m.
Dynamic Programming: Matrix chain multiplication
Let m(i, j) denote the minimal number of multiplications needed to compute the
product AiAi+1…Aj−1Aj ; let also the size of matrix Ai be si−1 × si.
Recursion: We examine all possible ways to place the principal (outermost)
multiplication, splitting the chain into the product (Ai . . . Ak) · (Ak+1 . . . Aj).
Note that both k − i < j − i and j − (k + 1) < j − i; thus we have the solutions
of the subproblems P (i, k) and P (k + 1, j) already computed and stored in slots
k − i and j − (k + 1), respectively, which precede the slot j − i we are presently
filling.
Note also that the matrix product Ai . . . Ak is an si−1 × sk matrix L and
Ak+1 . . . Aj is an sk × sj matrix R. To multiply an si−1 × sk matrix L and an
sk × sj matrix R it takes si−1 sk sj many multiplications.
Thus, m(i, j) = min{m(i, k) + m(k + 1, j) + si−1 sk sj : i ≤ k < j}.
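A Python sketch of the resulting algorithm (naming is mine; the chain is described by a dimension list s, where matrix Ai has size s[i−1] × s[i]):

```python
def matrix_chain(s):
    # s: dimension list of length n + 1; A_i is an s[i-1] x s[i] matrix.
    # m[i][j]: minimal number of multiplications to compute A_i ... A_j.
    n = len(s) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for d in range(1, n):                      # recursion on d = j - i
        for i in range(1, n - d + 1):
            j = i + d
            # try every position k of the principal (outermost) multiplication
            m[i][j] = min(m[i][k] + m[k + 1][j] + s[i - 1] * s[k] * s[j]
                          for k in range(i, j))
    return m[1][n]
```

On the earlier example (A = 10 × 100, B = 100 × 5, C = 5 × 50), the table recovers the cheaper grouping (AB)C with 7500 multiplications.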