
THE UNIVERSITY OF NEW SOUTH WALES
6. DYNAMIC PROGRAMMING
Raveen de Silva,
office: K17 202


Course Admin:
School of Computer Science and Engineering UNSW Sydney
Term 2, 2022

Table of Contents
1. Introduction
2. Example Problems
3. Applications to graphs
4. Puzzle

What is dynamic programming?
The main idea is to solve a large problem recursively by building from (carefully chosen) subproblems of smaller size.
Optimal substructure property
We must choose subproblems in such a way that
optimal solutions to subproblems can be combined into an optimal solution for the full problem.

Why is dynamic programming useful?
Recently we discussed greedy algorithms, where the problem is viewed as a sequence of stages and we consider only the locally optimal choice at each stage.
We saw that some greedy algorithms are incorrect, i.e. they fail to construct a globally optimal solution.
Also, greedy algorithms are unhelpful for certain types of problems, such as enumeration (“count the number of ways to …”).
Dynamic programming can be used to efficiently consider all the options at each stage.

Why is dynamic programming efficient?
We have already seen one problem-solving paradigm that used recursion: divide-and-conquer.
D&C aims to break a large problem into disjoint subproblems, solve those subproblems recursively and recombine.
However, DP is characterised by overlapping subproblems.

Why is dynamic programming efficient?
Overlapping subproblems property
We must choose subproblems in such a way that
the same subproblem occurs several times in the recursion tree.
When we solve a subproblem, we store the result so that subsequent instances of the same subproblem can be answered by just looking up a value in a table.
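The table-lookup idea above can be illustrated with a toy example not taken from these slides: computing Fibonacci numbers. In the naive recursion tree the same subproblem fib(k) occurs many times, but storing each result means it is computed only once.

```python
# A minimal sketch of memoisation: each subproblem's answer is stored in a
# table (here a dict) so repeated occurrences become O(1) lookups.

def fib(n, memo=None):
    """Return the nth Fibonacci number, caching subproblem results."""
    if memo is None:
        memo = {}
    if n <= 1:            # base cases: the trivial subproblems
        return n
    if n not in memo:     # solve each subproblem at most once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(30))  # 832040
```

Without the memo table this recursion takes exponential time; with it, each of the n subproblems is solved once in constant time.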

The parts of a dynamic programming algorithm
A dynamic programming algorithm consists of three parts:
a definition of the subproblems;
a recurrence relation, which determines how the solutions to smaller subproblems are combined to solve a larger subproblem, and
any base cases, which are the trivial subproblems – those for which the recurrence is not required.

Putting it all together
The original problem may be one of our subproblems, or it may be solved by combining results from several subproblems, in which case we must also describe this process.
Finally, we should be aware of the time complexity of our algorithm, which is usually given by multiplying the number of subproblems by the ‘average’ time taken to solve a subproblem using the recurrence.

Table of Contents
1. Introduction
2. Example Problems
3. Applications to graphs
4. Puzzle

Longest Increasing Subsequence
Instance: a sequence of n real numbers A[1..n].
Task: determine a subsequence (not necessarily contiguous) of maximum length, in which the values in the subsequence are strictly increasing.

Longest Increasing Subsequence
A natural choice for the subproblems is as follows: for each 1 ≤ i ≤ n, let P(i) be the problem of determining the length of the longest increasing subsequence of A[1..i].
However, it is not immediately obvious how to relate these subproblems to each other.
A more convenient specification involves Q(i), the problem of determining opt(i), the length of the longest increasing subsequence of A[1..i] ending at the last element A[i].
Note that the overall solution is recovered by max{opt(i) | 1 ≤ i ≤ n}.

Longest Increasing Subsequence
We will try to solve Q(i) by extending the sequence which solves Q(j) for some j < i.
One example of a greedy algorithm using this framework is as follows: solve Q(i) by extending the solution of Q(j) for j as close to i as possible, i.e. the largest j such that A[j] < A[i].
Design an instance of the problem for which this algorithm is not correct.

Longest Increasing Subsequence
Instead, assume we have solved all the subproblems for j < i.
We now look for all indices j < i such that A[j] < A[i]. Among those we pick m so that opt(m) is maximal, and extend that sequence with A[i].
This forms the basis of our recurrence!
The recurrence is not necessary if i = 1, as there are no previous indices to consider, so this is our base case.

Longest Increasing Subsequence
Subproblems: for each 1 ≤ i ≤ n, let Q(i) be the problem of determining opt(i), the maximum length of an increasing subsequence of A[1..i] which ends with A[i].
Recurrence: for i > 1,
opt(i) = max{opt(j) | j < i, A[j] < A[i]} + 1,
where by convention the maximum of the empty set is 0.
Base case: opt(1) = 1.

Longest Increasing Subsequence
The overall solution is max{opt(i) | 1 ≤ i ≤ n}.
Each of the n subproblems is solved in O(n) time by scanning all previous indices, so the time complexity is O(n²).
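The recurrence can be sketched directly in Python (0-indexed; the empty-set convention is handled by initialising every entry to 1, since each element on its own is an increasing subsequence):

```python
# A sketch of the LIS recurrence: opt[i] is the length of the longest strictly
# increasing subsequence of A ending at position i.

def longest_increasing_subsequence(A):
    n = len(A)
    if n == 0:
        return 0
    opt = [1] * n  # base case: each element alone has length 1
    for i in range(1, n):
        for j in range(i):
            if A[j] < A[i]:                      # A[i] can extend the LIS ending at A[j]
                opt[i] = max(opt[i], opt[j] + 1)
    return max(opt)  # overall answer: best over all ending positions

print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))  # 4
```

The two nested loops make the O(n²) running time visible; recovering an actual subsequence (not just its length) is a matter of also recording, for each i, which j achieved the maximum.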
Making Change
Instance: coin denominations v1, v2, …, vn (positive integers) and a target amount C. Assume v1 = 1, so that it is always possible to make change for any amount.
Task: make change for the amount C using the fewest coins possible.

Making Change
Subproblems: for each 0 ≤ i ≤ C, let P(i) be the problem of determining opt(i), the fewest coins needed to make change for amount i.
Recurrence: for i > 0,
opt(i) = min{opt(i − vk) | 1 ≤ k ≤ n, vk ≤ i} + 1.
Base case: opt(0) = 0.

Making Change
There is no extra work required to recover the overall solution; it is just opt(C).
Each of the C subproblems is solved in O(n) time, so the time complexity is O(nC).
Our algorithm is NOT a polynomial time algorithm in the length of the input, because C is represented by log C bits, while the running time is O(nC). There is no known polynomial time algorithm for this problem!

Making Change
Why does this produce an optimal solution for each amount i ≤ C?
Consider an optimal solution for some amount i, and say this solution includes at least one coin of denomination vm for some 1 ≤ m ≤ n.
Removing this coin must leave an optimal solution for the amount i − vm, again by our “cut-and-paste” argument.
By considering all coins of value at most i, we can pick m for which the optimal solution for amount i − vm uses the fewest coins.

Making Change
Suppose we were required to also determine the exact number of each coin required to make change for amount C.
In the ith slot of the table, we would store both opt(i) and the coin type k = pred(i) which minimises opt(i − vk).
Then pred(C) is the type of a coin used in an optimal solution for total C, leaving C′ = C − vpred(C) remaining. We then repeat, identifying another coin pred(C′) used in an optimal solution for total C′, and so on.
We denote the k that minimises opt(i − vk) by
argmin_{1 ≤ k ≤ n} opt(i − vk).
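Putting the recurrence and the pred-based reconstruction together, a Python sketch might look as follows (assuming, as above, that a coin of value 1 exists so every amount is reachable; here pred stores the coin value rather than its index):

```python
# A sketch of the change-making DP with reconstruction: opt[i] is the fewest
# coins for amount i, pred[i] the value of one coin in such a solution.
INF = float('inf')

def make_change(denominations, C):
    opt = [0] + [INF] * C          # base case: opt(0) = 0
    pred = [None] * (C + 1)
    for i in range(1, C + 1):
        for v in denominations:
            if v <= i and opt[i - v] + 1 < opt[i]:
                opt[i] = opt[i - v] + 1
                pred[i] = v        # a coin used in an optimal solution for i
    coins = []
    i = C
    while i > 0:                   # walk back through pred to list the coins
        coins.append(pred[i])
        i -= pred[i]
    return opt[C], sorted(coins)

print(make_change([1, 5, 6, 9], 11))  # (2, [5, 6])
```

The example denominations {1, 5, 6, 9} with C = 11 also show why greedy fails here: taking the largest coin first gives 9 + 1 + 1 (three coins), while the optimum is 5 + 6 (two coins).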

Integer Knapsack Problem (Duplicate Items Allowed)
Instance: You have n types of items; all items of kind i are identical and of weight wi and value vi . All weights are integers. You can take any number of items of each kind. You also have a knapsack of capacity C.
Task: Choose a combination of available items which all fit in the knapsack and whose value is as large as possible.

Integer Knapsack Problem (Duplicate Items Allowed)
Similarly to the previous problem, we solve for each total weight up to C.
Assume we have solved the problem for all total weights j < i. We now consider each type of item, the kth of which has weight wk . If this item is included, we would fill the remaining weight with the already computed optimal solution for i − wk . We choose the m which maximises the total value of the optimal solution for i − wm plus an item of type m, to obtain a packing of total weight i of the highest possible value. Integer Knapsack Problem (Duplicate Items Allowed) Subproblems: for each 0 ≤ i ≤ C, let P(i) be the problem of determining opt(i), the maximum value that can be achieved using up to i units of weight, and m(i), the type of some item in such a collection. Recurrence: for i > 0,
m(i) = argmax{opt(i − wk) + vk | 1 ≤ k ≤ n, wk ≤ i}
opt(i) = 0 if m(i) is undefined, and opt(i − wm(i)) + vm(i) otherwise.
Base case: opt(0) = 0, m(0) undefined.
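A Python sketch of this recurrence, computing only the optimal values (the m(i) table for reconstruction is omitted for brevity):

```python
# A sketch of the unbounded knapsack DP: opt[i] is the best value achievable
# with up to i units of weight; each item type may be used any number of times.

def unbounded_knapsack(weights, values, C):
    opt = [0] * (C + 1)              # base case: opt(0) = 0
    for i in range(1, C + 1):
        for w, v in zip(weights, values):
            if w <= i:               # item fits: fill remaining weight optimally
                opt[i] = max(opt[i], opt[i - w] + v)
    return opt[C]

print(unbounded_knapsack([3, 4], [5, 6], 10))  # 16
```

For capacity 10 with item types of (weight, value) = (3, 5) and (4, 6), the best packing is 3 + 3 + 4 for value 5 + 5 + 6 = 16.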

Integer Knapsack Problem (Duplicate Items Allowed)
The overall solution is opt(C), as the optimal knapsack can hold up to C units of weight.
Each of the C subproblems is solved in O(n), for a time complexity of O(nC).
Again, our algorithm is NOT polynomial in the length of the input.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Instance: You have n items, the ith of which has weight wi and value vi . All weights are integers. You also have a knapsack of capacity C.
Task: Choose a combination of available items which all fit in the knapsack and whose value is as large as possible.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Let’s use the same subproblems as before, and try to develop a recurrence.
If we know the optimal solution for each total weight j < i, can we deduce the optimal solution for weight i?
No! If we begin our solution for weight i with item k, we have i − wk remaining weight to fill. However, we did not record whether item k was itself used in the optimal solution for that weight.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
At this point you may object that we can in fact reconstruct the optimal solution for total weight i − wk by storing the values m(i) as we did earlier, and then backtrack. This unfortunately has two flaws:
1. the optimal solution for i − wk is not necessarily unique, so we may have recorded a selection including item k when a selection of equal value without item k also exists, and
2. if all optimal solutions for i − wk use item k, it is still possible that the best solution for i combines item k with some suboptimal solution for i − wk.
The underlying issue is that with this choice of subproblems, this problem does not have the optimal substructure property.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
When we are unable to form a correct recurrence between our chosen subproblems, this usually indicates that the subproblem specification is inadequate.
What extra parameters are required in our subproblem specification?
We would like to know the optimal solution for each weight without using item k. Directly adding this information to the subproblem specification still doesn’t lead to a useful recurrence. How could we capture it less directly?

Integer Knapsack Problem (Duplicate Items NOT Allowed)
For each total weight i, we will find the optimal solution using only the first k items.
We can take cases on whether item k is used in the solution:
if so, we have i − wk remaining weight to fill using the first k − 1 items, and
otherwise, we must fill all i units of weight with the first k − 1 items.
Integer Knapsack Problem (Duplicate Items NOT Allowed)
Subproblems: for 0 ≤ i ≤ C and 0 ≤ k ≤ n, let P(i,k) be the problem of determining opt(i,k), the maximum value that can be achieved using up to i units of weight and using only the first k items, and m(i,k), the (largest) index of an item in such a collection.
Recurrence: for i > 0 and 1 ≤ k ≤ n,
opt(i,k) = max(opt(i, k−1), opt(i − wk, k−1) + vk),
where the second option applies only if wk ≤ i, and with m(i,k) = m(i,k−1) in the first case and k in the second.
Base cases: if i = 0 or k = 0, then opt(i,k) = 0 and m(i,k) is undefined.
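A Python sketch of this two-parameter table, filled in increasing order of k and, for each k, in increasing order of i:

```python
# A sketch of the 0/1 knapsack DP: opt[i][k] is the best value using up to
# i units of weight and only the first k items (items are 1-indexed in the
# recurrence, so item k has weight weights[k-1] and value values[k-1]).

def knapsack_01(weights, values, C):
    n = len(weights)
    opt = [[0] * (n + 1) for _ in range(C + 1)]  # base cases: i = 0 or k = 0
    for k in range(1, n + 1):
        for i in range(1, C + 1):
            opt[i][k] = opt[i][k - 1]            # case 1: item k not used
            if weights[k - 1] <= i:              # case 2: item k used, if it fits
                opt[i][k] = max(opt[i][k],
                                opt[i - weights[k - 1]][k - 1] + values[k - 1])
    return opt[C][n]

print(knapsack_01([3, 4, 5], [5, 6, 10], 8))  # 15
```

For capacity 8 with items of (weight, value) = (3, 5), (4, 6), (5, 10), the optimum takes items 1 and 3 for value 15; note each item is used at most once.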

Integer Knapsack Problem (Duplicate Items NOT Allowed)
We need to be careful about the order in which we solve the subproblems.
When we get to P(i,k), the recurrence requires us to have already solved P(i, k−1) and P(i − wk, k−1).
This is guaranteed if we solve the subproblems P(i,k) in increasing order of k, and for each k in increasing order of capacity i.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
The overall solution is opt(C,n).
Each of the O(nC) subproblems is solved in constant time, for a time complexity of O(nC).

Balanced Partition
Instance: a set of n positive integers xi .
Task: partition these integers into two subsets S1 and S2 with sums Σ1 and Σ2 respectively, so as to minimise |Σ1 − Σ2|.

Balanced Partition
Suppose without loss of generality that Σ1 ≥ Σ2.
Let Σ = x1 + . . . + xn, the sum of all integers in the set.
Observe that Σ1 + Σ2 = Σ, which is a constant, and upon rearranging it follows that
Σ1 − Σ2 = 2(Σ/2 − Σ2).
So, all we have to do is find a subset S2 of these numbers with total sum as close to Σ/2 as possible, but not exceeding it.

Balanced Partition
For each integer xi in the set, construct an item with both weight and value equal to xi .
Consider the knapsack problem (with duplicate items not allowed), with items as specified above and knapsack capacity Σ/2.
The best packing of this knapsack produces an optimally balanced partition, with set S1 given by the items outside the knapsack and set S2 given by the items in the knapsack.
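Because each item's weight equals its value, the knapsack in this reduction simplifies to a subset-sum reachability table: the best value not exceeding ⌊Σ/2⌋ is exactly the largest achievable subset sum Σ2. A Python sketch of the reduction:

```python
# A sketch of balanced partition via the knapsack reduction: find the largest
# subset sum Σ2 not exceeding Σ/2; the minimum imbalance is then Σ − 2·Σ2.

def balanced_partition(xs):
    total = sum(xs)
    half = total // 2
    # reachable[i] = True if some subset of xs sums to exactly i (i <= half)
    reachable = [False] * (half + 1)
    reachable[0] = True
    for x in xs:
        for i in range(half, x - 1, -1):  # iterate downwards: each item used once
            if reachable[i - x]:
                reachable[i] = True
    sigma2 = max(i for i in range(half + 1) if reachable[i])
    return total - 2 * sigma2             # |Σ1 − Σ2|

print(balanced_partition([3, 1, 4, 2, 2, 1]))  # 1
```

The downward loop over i is the standard way to enforce that each item is used at most once, mirroring the "first k items" dimension of the earlier table.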

Matrix chain multiplication
Let A and B be matrices. The matrix product AB exists if A has as many columns as B has rows: if A is m × n and B is n×p, then AB is m×p.
Each element of AB is the dot product of a row of A with a column of B, both of which have length n. Therefore
m × n × p multiplications are required to compute AB.
Matrix multiplication is associative, that is, for any three matrices of compatible sizes we have A(BC) = (AB)C.
However, the number of real number multiplications needed to obtain the product can be very different.

Matrix chain multiplication
Suppose A is 10×100, B is 100×5 and C is 5×50.
To evaluate (AB)C: computing AB takes 10×100×5 = 5000 multiplications and yields a 10×5 matrix, then multiplying by C takes 10×5×50 = 2500 more.
To evaluate A(BC): computing BC takes 100×5×50 = 25000 multiplications and yields a 100×50 matrix, then multiplying A by it takes 10×100×50 = 50000 more.
Evaluating (AB)C involves only 7500 multiplications, but evaluating A(BC) requires 75000 multiplications!
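These counts can be checked in a few lines of Python, using the fact that multiplying an m×n matrix by an n×p matrix takes m·n·p scalar multiplications:

```python
# Scalar multiplication count for an m×n times n×p matrix product.
def cost(m, n, p):
    return m * n * p

# (AB)C: AB is 10×100 times 100×5, then the 10×5 result times C (5×50).
print(cost(10, 100, 5) + cost(10, 5, 50))    # 7500
# A(BC): BC is 100×5 times 5×50, then A (10×100) times the 100×50 result.
print(cost(100, 5, 50) + cost(10, 100, 50))  # 75000
```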

Matrix chain multiplication
Instance: a compatible sequence of matrices A1A2…An, where Ai is of dimension si−1 × si.
Task: group them in such a way as to minimise the total number of multiplications needed to find the product matrix.

Matrix chain multiplication
How many groupings are there?
The total number of different groupings satisfies the following recurrence (why?):
T(n) = Σ_{i=1}^{n−1} T(i) T(n − i),
with base case T(1) = 1.
One can show that the solution satisfies T(n) = Ω(2^n).
Thus, we cannot efficiently do an exhaustive search over all possible groupings.
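The recurrence for T(n) is easy to evaluate directly; the values it produces are the Catalan numbers. A Python sketch:

```python
# A sketch of the counting recurrence: T[m] is the number of ways to fully
# parenthesise a chain of m matrices, splitting the chain after matrix i.

def num_groupings(n):
    T = [0] * (n + 1)
    T[1] = 1                      # base case: a single matrix, one (trivial) grouping
    for m in range(2, n + 1):
        T[m] = sum(T[i] * T[m - i] for i in range(1, m))
    return T[n]

print([num_groupings(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```

The rapid growth of these values confirms that enumerating all groupings is infeasible, motivating the dynamic programming approach.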
