CSCI 570 – Fall 2020 – HW7
Due October 21st 11:59pm 2020
1 Graded Problems
1. Let us think of character strings as sequences of characters. Given two sequences X = x_1 x_2 … x_m and Y = y_1 y_2 … y_n, give a dynamic programming algorithm that finds the length of their longest common subsequence.
Here is the recursive formulation for the dynamic programming solution:
Basis: If either sequence is empty, then the longest common subsequence is empty. Therefore, lcs(i, 0) = lcs(0, j) = 0.
Recursive formulation (a short implementation sketch follows the list below): lcs(i, j) =
• 0 if i = 0 or j = 0
• lcs(i−1, j−1) + 1 if i, j > 0 and x_i = y_j
• max(lcs(i−1, j), lcs(i, j−1)) if i, j > 0 and x_i ≠ y_j
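A minimal bottom-up Python sketch of this recurrence, as a sanity check (the function name lcs_length and the sample strings are our own, not part of the assignment):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y, filled bottom-up."""
    m, n = len(x), len(y)
    # lcs[i][j] = length of an LCS of x[:i] and y[:j]; row 0 and column 0 are the basis.
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:              # x_i = y_j
                lcs[i][j] = lcs[i - 1][j - 1] + 1
            else:                                 # x_i != y_j
                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
    return lcs[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. the subsequence "BCAB"
```

Filling the table row by row guarantees that lcs(i−1, j−1), lcs(i−1, j), and lcs(i, j−1) are already available when lcs(i, j) is computed, giving O(mn) time and space.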
2. This problem involves determining the optimal order in which to perform a series of operations. This general class of problems is important in compiler design for code optimization and in databases for query optimization.
Some background on matrix multiplication:
Let A be a p×q matrix and B be a q×r matrix. We know that for matrix multiplication, the number of columns of A must equal the number of rows of B. In particular, for 1 ≤ i ≤ p and 1 ≤ j ≤ r, the product C = AB satisfies

C[i, j] = Σ_{k=1}^{q} A[i, k] B[k, j]
This corresponds to the (hopefully familiar) rule that the [i, j] entry of C is the dot product of the ith (horizontal) row of A and the jth (vertical) column of B. Observe that there are pr total entries in C and each takes O(q) time to compute, thus the total time to multiply these two matrices is proportional to the product of the dimensions, pqr.
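To make the pqr count concrete, here is a small Python sketch of this definition (the function name matmul and the 2×2 test matrices are our own):

```python
def matmul(A, B):
    """Naive product of a p x q matrix A and a q x r matrix B (lists of lists).
    The triple loop performs exactly p*q*r scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    assert all(len(row) == q for row in A), "columns of A must equal rows of B"
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):   # C[i][j] is the dot product of row i of A and column j of B
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```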
We also know that matrix multiplication is associative, meaning:
(AB)C = A(BC)
Notice the difference in the order of operations: on the left side we compute AB first and then multiply the result by C, while on the right side BC is computed first and then it is multiplied by A.
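For instance (dimensions chosen purely for illustration), if A is 10 × 100, B is 100 × 5, and C is 5 × 50, then (AB)C costs 10·100·5 + 10·5·50 = 7,500 scalar multiplications, while A(BC) costs 100·5·50 + 10·100·50 = 75,000, even though both orders yield the same matrix.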
Suppose that we wish to multiply a series of matrices C = A_1 A_2 … A_n.
Devise a dynamic programming algorithm that determines the best order in which to perform the multiplications. Note that this algorithm does not perform the multiplications; it just determines the best order in which to perform them and the total number of operations.
Solution:
Let m(i, j) denote the minimum number of scalar multiplications needed to compute A_{i..j} = A_i A_{i+1} … A_j, where A_i is a p_{i−1} × p_i matrix. The desired total cost of multiplying all the matrices is that of computing the entire chain A_{1..n}, namely m(1, n).
Recursive formulation for dynamic programming:
Basis: Observe that if i = j then the sequence contains only one matrix, and so the cost is 0. (There is nothing to multiply.) Thus, m(i, i) = 0.
Step: If i < j, then we are asking about the product A_{i..j}. This can be split into two groups, A_{i..k} times A_{k+1..j}, by considering each k with i ≤ k < j. The optimum times to compute A_{i..k} and A_{k+1..j} are, by definition, m(i, k) and m(k + 1, j), respectively. We may assume that these values have been computed previously and are already stored in our array. Since A_{i..k} is a p_{i−1} × p_k matrix and A_{k+1..j} is a p_k × p_j matrix, the time to multiply them is p_{i−1} p_k p_j. This suggests the following recursive rule for computing m(i, j) (a code sketch follows the rule below).
• m(i, i) = 0
• m(i, j) = min_{i ≤ k < j} ( m(i, k) + m(k + 1, j) + p_{i−1} p_k p_j ) for i < j
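A minimal bottom-up Python sketch of this recurrence (the function name matrix_chain_order, the split table s, and the sample dimension list are our own; A_i is a p[i−1] × p[i] matrix, so len(p) = n + 1):

```python
def matrix_chain_order(p):
    """Minimum number of scalar multiplications to compute A_1 A_2 ... A_n,
    where A_i is a p[i-1] x p[i] matrix. s[i][j] records the best split point k."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 is the basis
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # length of the chain A_{i..j}
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # split as A_{i..k} times A_{k+1..j}
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m[1][n], s

cost, s = matrix_chain_order([10, 100, 5, 50])  # A1: 10x100, A2: 100x5, A3: 5x50
print(cost)  # 7500, achieved by the order (A1 A2) A3
```

Filling the table in order of increasing chain length ensures that m(i, k) and m(k + 1, j) are already available, so the algorithm runs in O(n^3) time; the split table s suffices to reconstruct the optimal parenthesization.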
(c) O(NTK)
2 Practice Problems
4. Reading assignment: Kleinberg and Tardos, Chapter 6.