
THE UNIVERSITY OF NEW SOUTH WALES
5. DYNAMIC PROGRAMMING
Raveen de Silva,
office: K17 202


Course Admin: ,
School of Computer Science and Engineering UNSW Sydney
Term 1, 2022

Table of Contents
1. Introduction
2. Example Problems
3. Applications to graphs
4. Puzzle

What is dynamic programming?
The main idea is to solve a large problem recursively by building from (carefully chosen) subproblems of smaller size.
Optimal substructure property
We must choose subproblems so that the optimal solutions to subproblems can be combined into an optimal solution for the full problem.

Why is dynamic programming useful?
Recently we discussed greedy algorithms, where the problem is viewed as a sequence of stages and we consider only the locally optimal choice at each stage.
We saw that some greedy algorithms are incorrect, i.e. they fail to construct a globally optimal solution.
Also, greedy algorithms are often unhelpful for certain types of problems, such as “count the number of ways to . . . ”.
Dynamic programming can be used to efficiently consider all the options at each stage.

Why is dynamic programming efficient?
We have already seen one problem-solving paradigm that used recursion: divide-and-conquer.
D&C aims to break a large problem into disjoint subproblems, solve those subproblems recursively and recombine.
However, DP is characterised by overlapping subproblems.
Overlapping subproblems property
We must choose subproblems so that the same subproblem occurs several times in the recursion tree. When we solve a subproblem, we store the result so that subsequent instances of the same subproblem can be answered by just looking up a value in a table.
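
As an illustration (not from the slides), here is a minimal Python sketch of this idea using Fibonacci numbers, a classic case of overlapping subproblems: each value is computed once and later occurrences are answered by a table lookup.

def fib(n, memo=None):
    # Memoised recursion: each subproblem is solved at most once.
    if memo is None:
        memo = {}
    if n <= 1:
        return n              # base cases: the trivial subproblems
    if n not in memo:
        # Overlapping subproblems: store the result the first time,
        # then answer repeated occurrences from the table.
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]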

The parts of a dynamic programming algorithm
A dynamic programming algorithm consists of three parts:
a definition of the subproblems;
a recurrence relation, which determines how the solutions to smaller subproblems are combined to solve a larger subproblem, and
any base cases, which are the trivial subproblems – those for which the recurrence is not required.

Putting it all together
The original problem may be one of our subproblems, or it may be solved by combining results from several subproblems, in which case we must also describe this process.
Finally, we should be aware of the time complexity of our algorithm, which is usually given by multiplying the number of subproblems by the ‘average’ time taken to solve a subproblem using the recurrence.

Table of Contents
1. Introduction
2. Example Problems
3. Applications to graphs
4. Puzzle

Longest Increasing Subsequence
Instance: a sequence of n real numbers A[1..n].
Task: determine a subsequence (not necessarily contiguous) of maximum length, in which the values in the subsequence are strictly increasing.

Longest Increasing Subsequence
A natural choice for the subproblems is as follows: for each 1 ≤ i ≤ n, let P(i) be the problem of determining the length of the longest increasing subsequence of A[1..i].
However, it is not clear how to relate these subproblems to each other.
A more convenient specification involves Q(i), the problem of determining opt(i), the length of the longest increasing subsequence of A[1..i] including the last element A[i].
Note that the overall solution is recovered by max{opt(i) | 1 ≤ i ≤ n}.

Longest Increasing Subsequence
We will try to solve Q(i) by extending the sequence which solves Q(j) for some j < i.
One example of a greedy algorithm using this framework is as follows: solve Q(i) by extending the solution of Q(j) for j as close to i as possible, i.e. the largest j such that A[j] < A[i].
Design an instance of the problem for which this algorithm is not correct.

Longest Increasing Subsequence
Instead, assume we have solved all the subproblems for j < i.
We now look for all indices j < i such that A[j] < A[i]. Among those we pick m so that opt(m) is maximal, and extend that sequence with A[i]. This forms the basis of our recurrence!
The recurrence is not necessary if i = 1, as there are no previous indices to consider, so this is our base case.

Longest Increasing Subsequence
Subproblems: for each 1 ≤ i ≤ n, let Q(i) be the problem of determining opt(i), the maximum length of an increasing subsequence of A[1..i] which ends with A[i].
Recurrence: for i > 1,
opt(i) = max{opt(j) | j < i, A[j] < A[i]} + 1,
where the maximum of an empty set is taken to be 0.
Base case: opt(1) = 1.

Longest Increasing Subsequence
The overall solution is max{opt(i) | 1 ≤ i ≤ n}.
Each of the n subproblems is solved in O(n) time by scanning all earlier indices, so the time complexity is O(n²).
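
A short Python sketch of this recurrence (0-based indexing; the function and variable names are my own):

def lis_length(A):
    n = len(A)
    if n == 0:
        return 0
    opt = [1] * n                 # base case: A[i] alone is an increasing subsequence
    for i in range(1, n):
        for j in range(i):
            if A[j] < A[i]:       # A[i] can extend a subsequence ending at A[j]
                opt[i] = max(opt[i], opt[j] + 1)
    return max(opt)               # overall solution: best over all final elements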

Making Change
Instance: an amount C and coin denominations v1, . . . , vn, with an unlimited supply of coins of each denomination.
Task: make change for the amount C using as few coins as possible.

Making Change
Subproblems: for each 0 ≤ i ≤ C, let P(i) be the problem of determining opt(i), the minimum number of coins needed to make change for amount i.
Recurrence: for i > 0,
opt(i) = min{opt(i − vk) | 1 ≤ k ≤ n, vk ≤ i} + 1.
Base case: opt(0) = 0.

Making Change
There is no extra work required to recover the overall solution; it is just opt(C).
Each of C subproblems is solved in O(n) time, so the time complexity is O(nC).
Our algorithm is NOT a polynomial time algorithm in the length of the input, because C is represented by log C bits, while the running time is O(nC). There is no known polynomial time algorithm for this problem!

Making Change
Why does this produce an optimal solution for each amount i ≤ C?
Consider an optimal solution for some amount i, and say this solution includes at least one coin of denomination vm for some 1 ≤ m ≤ n.
Removing this coin must leave an optimal solution for the amount i − vm, again by our “cut-and-paste” argument.
By considering all coins of value at most i, we can pick m for which the optimal solution for amount i − vm uses the fewest coins.

Making Change
Suppose we were required to also determine the exact number of each coin required to make change for amount C.
In the ith slot of the table, we would store both opt(i) and the index m which minimises opt(i − vm).
We could then reconstruct the optimal solution by finding m1 stored in slot i, then finding m2 stored in slot i − vm1, then m3 stored in slot i − vm1 − vm2, and so on.
We denote the m that minimises opt(i − vm) by
argmin over 1 ≤ m ≤ n of opt(i − vm).
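
A Python sketch of this table, storing both opt(i) and the chosen index in each slot (identifiers are my own; the amount is assumed to be reachable, e.g. because a 1-unit coin exists):

def make_change(v, C):
    INF = float('inf')
    opt = [0] + [INF] * C          # opt[i] = fewest coins making amount i
    m = [None] * (C + 1)           # m[i] = index of one coin used in an optimal solution for i
    for i in range(1, C + 1):
        for k, vk in enumerate(v):
            if vk <= i and opt[i - vk] + 1 < opt[i]:
                opt[i] = opt[i - vk] + 1
                m[i] = k
    coins, i = [], C               # reconstruct by walking back through the stored indices
    while i > 0:
        coins.append(v[m[i]])
        i -= v[m[i]]
    return opt[C], coins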

Integer Knapsack Problem (Duplicate Items Allowed)
Instance: You have n types of items; all items of kind i are identical and of weight wi and value vi . You can take any number of items of each kind. You also have a knapsack of capacity C.
Task: Choose a combination of available items which all fit in the knapsack and whose value is as large as possible.

Integer Knapsack Problem (Duplicate Items Allowed)
Similarly to the previous problem, we solve for each total weight up to C.
Assume we have solved the problem for all total weights j < i.
We now consider each type of item, the kth of which has weight wk. If this item is included, we would fill the remaining weight with the already computed optimal solution for i − wk.
We choose the m which maximises the total value of the optimal solution for i − wm plus an item of type m, to obtain a packing of total weight at most i of the highest possible value.

Integer Knapsack Problem (Duplicate Items Allowed)
Subproblems: for each 0 ≤ i ≤ C, let P(i) be the problem of determining opt(i), the maximum value that can be achieved using up to i units of weight, and m(i), the type of some item in such a collection.
Recurrence: for i > 0,
m(i) = argmax over 1 ≤ k ≤ n with wk ≤ i of (opt(i − wk) + vk),
opt(i) = 0 if m(i) is undefined, and opt(i) = opt(i − wm(i)) + vm(i) otherwise.
Base case: opt(0) = 0, m(0) undefined.
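
A Python sketch of this recurrence (identifiers are my own):

def unbounded_knapsack(w, v, C):
    opt = [0] * (C + 1)            # opt[i] = best value using up to i units of weight
    m = [None] * (C + 1)           # m[i] = type of one item in such a packing (for reconstruction)
    for i in range(1, C + 1):
        for k in range(len(w)):
            if w[k] <= i and opt[i - w[k]] + v[k] > opt[i]:
                opt[i] = opt[i - w[k]] + v[k]
                m[i] = k
    return opt[C], m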

Integer Knapsack Problem (Duplicate Items Allowed)
The overall solution is opt(C), as the optimal knapsack can hold up to C units of weight.
Each of C subproblems is solved in O(n), for a time complexity of O(nC).
Again, our algorithm is NOT polynomial in the length of the input.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Instance: You have n items (some of which can be identical), the ith of which has weight wi and value vi. You also have a knapsack of capacity C.
Task: Choose a combination of available items which all fit in the knapsack and whose value is as large as possible.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Let’s use the same subproblems as before, and try to develop a recurrence.
If we know the optimal solution for each total weight j < i, can we deduce the optimal solution for weight i?
No! If we begin our solution for weight i with item k, we have i − wk remaining weight to fill. However, we did not record whether item k was itself used in the optimal solution for that weight.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
At this point you may object that we can in fact reconstruct the optimal solution for total weight i − wk by storing the values m(i) as we did earlier, and then backtrack.
This unfortunately has two flaws:
1. the optimal solution for i − wk is not necessarily unique, so we may have recorded a selection including item k when a selection of equal value without item k also exists, and
2. if all optimal solutions for i − wk use item k, it is still possible that the best solution for i combines item k with some suboptimal solution for i − wk.
The underlying issue is that with this choice of subproblems, this problem does not have the optimal substructure property.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
When we are unable to form a correct recurrence between our chosen subproblems, this usually indicates that the subproblem specification is inadequate.
What extra parameters are required in our subproblem specification?
We would like to know the optimal solution for each weight without using item k. Directly adding this information to the subproblem specification still doesn’t lead to a useful recurrence. How could we capture it less directly?

Integer Knapsack Problem (Duplicate Items NOT Allowed)
For each total weight i, we will find the optimal solution using only the first k items.
We can take cases on whether item k is used in the solution:
if so, we have i − wk remaining weight to fill using the first k − 1 items, and
otherwise, we must fill all i units of weight with the first k − 1 items.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Subproblems: for 0 ≤ i ≤ C and 0 ≤ k ≤ n, let P(i,k) be the problem of determining opt(i,k), the maximum value that can be achieved using up to i units of weight and using only the first k items, and m(i,k), the (largest) index of an item in such a collection.
Recurrence: for i > 0 and 1 ≤ k ≤ n,
opt(i,k) = max(opt(i,k−1), opt(i−wk,k−1) + vk), where the second option is available only if wk ≤ i,
with m(i,k) = m(i,k−1) in the first case and k in the second.
Base cases: if i = 0 or k = 0, then opt(i,k) = 0 and m(i,k) is undefined.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
We need to be careful about the order in which we solve the subproblems.
When we get to P(i,k), the recurrence requires us to have already solved P(i,k −1) and P(i −wk,k −1).
This is guaranteed if we solve the subproblems P(i,k) in increasing order of k, and for each k in increasing order of capacity i, as in the sketch below.
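
A Python sketch of the table for this recurrence, filled in increasing order of k and then of i as described above (identifiers are my own; items are 0-indexed in the code):

def knapsack_01(w, v, C):
    n = len(w)
    # opt[i][k] = best value using up to i units of weight and only the first k items
    opt = [[0] * (n + 1) for _ in range(C + 1)]
    for k in range(1, n + 1):          # increasing order of k ...
        for i in range(C + 1):         # ... then increasing order of capacity i
            opt[i][k] = opt[i][k - 1]                          # item k not used
            if w[k - 1] <= i:                                  # item k used
                opt[i][k] = max(opt[i][k], opt[i - w[k - 1]][k - 1] + v[k - 1])
    return opt[C][n]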

Integer Knapsack Problem (Duplicate Items NOT Allowed)
The overall solution is opt(C,n).
Each of O(nC) subproblems is solved in constant time, for a time complexity of O(nC).

Balanced Partition
Instance: a set of n positive integers xi .
Task: partition these integers into two subsets S1 and S2 with sums Σ1 and Σ2 respectively, so as to minimise |Σ1 − Σ2|.

Balanced Partition
Suppose without loss of generality that Σ1 ≥ Σ2.
Let Σ = x1 + . . . + xn, the sum of all integers in the set.
Observe that Σ1 + Σ2 = Σ, which is a constant, and upon rearranging it follows that
Σ1 − Σ2 = 2(Σ/2 − Σ2).
So, all we have to do is find a subset S2 of these numbers with total sum as close to Σ/2 as possible, but not exceeding it.

Balanced Partition
For each integer xi in the set, construct an item with both weight and value equal to xi .
Consider the knapsack problem (with duplicate items not allowed), with items as specified above and knapsack capacity Σ/2.
The best packing of this knapsack produces an optimally balanced partition, with set S1 given by the items outside the knapsack and set S2 given by the items in the knapsack.
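
A Python sketch of this reduction, reusing a 0/1 knapsack routine such as the knapsack_01 sketch above (identifiers are my own):

def balanced_partition_difference(xs):
    total = sum(xs)
    # Each integer becomes an item with weight = value = xs[j]; the capacity is floor(total / 2).
    best_sigma2 = knapsack_01(xs, xs, total // 2)
    return total - 2 * best_sigma2     # |Sigma1 - Sigma2| = Sigma - 2 * Sigma2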

Assembly line scheduling
Instance: You are given two assembly lines, each consisting of n workstations. The kth workstation on each assembly line performs the kth of n jobs.
To bring a new product to the start of assembly line i takes si units of time.
To retrieve a finished product from the end of assembly line i takes fi units of time.

Assembly line scheduling
Problem (continued)
On assembly line i, the kth job takes ai,k units of time to complete.
To move the product from station k on assembly line i to station k + 1 on the other line takes ti,k units of time.
There is no time required to continue from station k to station k + 1 on the same line.
Task: Find a fastest way to assemble a product using both lines as necessary.

Assembly line scheduling
[Diagram: the two assembly lines from start to finish, showing entry times s1, s2, job times ai,k, transfer times ti,k and exit times f1, f2.]

Assembly line scheduling
[Diagram: close-up of stations k − 2, k − 1 and k on both lines, with the transfer times between them.]

Assembly line scheduling
The natural choice of subproblems is to have one corresponding to each workstation.
To form a recurrence, we should consider the ways of getting to workstation k on assembly line i.
We could have come from workstation k − 1 on either line, after completing the previous job.
The exception is the first workstation, which leads to the base case.

Assembly line scheduling
Subproblems: for i ∈ {1,2} and 1 ≤ k ≤ n, let P(i,k) be the problem of determining opt(i,k), the minimum time needed to complete the first k jobs, finishing at workstation k on assembly line i.
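
The recurrence itself is not shown above, but under the natural reading of the previous slide (reach station k either from station k − 1 on the same line, or from station k − 1 on the other line after paying the transfer time), a Python sketch might look as follows (all identifiers and the 0-based indexing are my own):

def assembly_line(a, t, s, f):
    # a[i][k]: time of the (k+1)-th job on line i+1
    # t[i][k]: time to move from station k+1 on line i+1 to station k+2 on the other line
    # s[i], f[i]: entry and exit times for line i+1
    n = len(a[0])
    # opt[i][k] = fastest time to complete the first k+1 jobs, ending at station k+1 on line i+1
    opt = [[0] * n for _ in range(2)]
    for i in range(2):
        opt[i][0] = s[i] + a[i][0]                       # base case: first workstation
    for k in range(1, n):
        for i in range(2):
            stay = opt[i][k - 1]                         # came from the same line
            cross = opt[1 - i][k - 1] + t[1 - i][k - 1]  # came from the other line
            opt[i][k] = min(stay, cross) + a[i][k]
    return min(opt[0][n - 1] + f[0], opt[1][n - 1] + f[1])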
