
6. DYNAMIC PROGRAMMING I
‣ weighted interval scheduling
‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure
Lecture slides by Kevin Wayne
 Copyright © 2005 Pearson-Addison Wesley

http://www.cs.princeton.edu/~wayne/kleinberg-tardos
Last updated on 2/10/16 9:26 AM

Algorithmic paradigms
Greedy. Build up a solution incrementally, myopically optimizing
 some local criterion.

Divide-and-conquer. Break up a problem into independent subproblems,
 solve each subproblem, and combine solutions to subproblems to form solution to original problem.

Dynamic programming. Break up a problem into a series of overlapping subproblems, and build up solutions to larger and larger subproblems.
(fancy name for caching away intermediate results in a table for later reuse)

Dynamic programming history
Bellman. Pioneered the systematic study of dynamic programming in the 1950s.

Etymology.
・Dynamic programming = planning over time.
・Secretary of Defense was hostile to mathematical research.
・Bellman sought an impressive name to avoid confrontation.
THE THEORY OF DYNAMIC PROGRAMMING
RICHARD BELLMAN

1. Introduction. Before turning to a discussion of some representative problems which will permit us to exhibit various mathematical features of the theory, let us present a brief survey of the fundamental concepts, hopes, and aspirations of dynamic programming.
To begin with, the theory was created to treat the mathematical problems arising from the study of various multi-stage decision processes, which may roughly be described in the following way: We have a physical system whose state at any time t is determined by a set of quantities which we call state parameters, or state variables. At certain times, which may be prescribed in advance, or which may be determined by the process itself, we are called upon to make decisions which will affect the state of the system. These decisions are equivalent to transformations of the state variables, the choice of a decision being identical with the choice of a transformation. The outcome of the preceding decisions is to be used to guide the choice of future ones, with the purpose of the whole process that of maximizing some function of the parameters describing the final state.
Examples of processes fitting this loose description are furnished by virtually every phase of modern life, from the planning of industrial production lines to the scheduling of patients at a medical clinic; from the determination of long-term investment programs for universities to the determination of a replacement policy for machinery in factories; from the programming of training policies for skilled and unskilled labor to the choice of optimal purchasing and inventory policies for department stores and military establishments.
It is abundantly clear from the very brief description of possible applications that the problems arising from the study of these processes are problems of the future as well as of the immediate present.
Turning to a more precise discussion, let us introduce a small …

Dynamic programming applications
Areas.
・Bioinformatics.
・Control theory.
・Information theory.
・Operations research.
・Computer science: theory, graphics, AI, compilers, systems, ….
・…

Some famous dynamic programming algorithms.
・Unix diff for comparing two files.
・Viterbi for hidden Markov models.
・De Boor for evaluating spline curves.
・Smith-Waterman for genetic sequence alignment.
・Bellman-Ford for shortest path routing in networks.
・Cocke-Kasami-Younger for parsing context-free grammars.
・…

6. DYNAMIC PROGRAMMING I
‣ weighted interval scheduling
‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

Weighted interval scheduling
Weighted interval scheduling problem.
・Job j starts at sj, finishes at fj, and has weight or value vj.
・Two jobs compatible if they don’t overlap.
・Goal: find maximum weight subset of mutually compatible jobs.
[Figure: jobs a–h on a timeline from 0 to 11]

Earliest-finish-time first algorithm
Earliest finish-time first.
・Consider jobs in ascending order of finish time.
・Add job to subset if it is compatible with previously chosen jobs.

Recall. Greedy algorithm is correct if all weights are 1.

Observation. Greedy algorithm fails spectacularly for weighted version.

[Figure: timeline 0–11 with jobs a, b, h; one long job of weight 999 overlaps two short jobs of weight 1, and greedy picks the weight-1 jobs]

Weighted interval scheduling
Notation. Label jobs by finishing time: f1 ≤ f2 ≤ . . . ≤ fn . 

Def. p(j) = largest index i < j such that job i is compatible with j.
Ex. p(8) = 5, p(7) = 3, p(2) = 0.

[Figure: jobs 1–8 on a timeline from 0 to 11, illustrating p(j)]

Dynamic programming: binary choice

Notation. OPT(j) = value of optimal solution to the problem consisting of
 job requests 1, 2, ..., j. 
Case 1. OPT selects job j.
・Collect profit vj.
・Can't use incompatible jobs { p(j) + 1, p(j) + 2, ..., j – 1 }.
・Must include optimal solution to problem consisting of remaining compatible jobs 1, 2, ..., p(j).

Case 2. OPT does not select job j.
・Must include optimal solution to problem consisting of remaining compatible jobs 1, 2, ..., j – 1.

optimal substructure property (proof via exchange argument)

OPT(j) = 0                                    if j = 0
OPT(j) = max { vj + OPT(p(j)), OPT(j−1) }     otherwise

Weighted interval scheduling: brute force

Input: n, s[1..n], f[1..n], v[1..n]

Sort jobs by finish time so that f[1] ≤ f[2] ≤ ... ≤ f[n].   ← cost: O(n log n)
Compute p[1], p[2], ..., p[n].   ← exercise: how to do this efficiently

Compute-Opt(j)
  if j = 0
    return 0.
  else
    return max(v[j] + Compute-Opt(p[j]), Compute-Opt(j–1)).

Weighted interval scheduling: brute force

Observation. Recursive algorithm fails spectacularly because of redundant subproblems ⇒ exponential algorithms.
Ex. Number of recursive calls for family of "layered" instances grows like Fibonacci sequence.

[Figure: recursion tree for 5 jobs with p(1) = 0, p(j) = j − 2]

(Handwritten note: T(0) = T(1) = 1 and T(j) = T(j−1) + T(j−2) + O(1), so the number of calls is about the (j+2)nd Fibonacci number ≈ φ^j, where φ ≈ 1.618 is the golden ratio; e.g., 2,178,309 calls for j = 30 and 32,951,280,099 for j = 50.)

Weighted interval scheduling: memoization

Memoization. Cache results of each subproblem; lookup as needed.

Input: n, s[1..n], f[1..n], v[1..n]

Sort jobs by finish time so that f[1] ≤ f[2] ≤ ... ≤ f[n].
Compute p[1], p[2], ..., p[n].

for j = 1 to n
  M[j] ← empty.   ← global array
M[0] ← 0.

M-Compute-Opt(j)
  if M[j] is empty
    M[j] ← max(v[j] + M-Compute-Opt(p[j]), M-Compute-Opt(j – 1)).
  return M[j].

Weighted interval scheduling: running time

Claim. Memoized version of algorithm takes O(n log n) time.
・Sort by finish time: O(n log n).
・Computing p(⋅): O(n log n) via sorting by start time.
・M-COMPUTE-OPT(j): each invocation takes O(1) time and either
  - (i) returns an existing value M[j], or
  - (ii) fills in one new entry M[j] and makes two recursive calls.
・Progress measure Φ = # nonempty entries of M[].
  - initially Φ = 0, throughout Φ ≤ n.
  - (ii) increases Φ by 1 ⇒ at most 2n recursive calls.
 ・Overall running time of M-COMPUTE-OPT(n) is O(n). ▪ 
Remark. O(n) if jobs are presorted by start and finish times.

(Handwritten note: memoization prunes the recursion tree; repeated subtrees are looked up instead of recomputed.)

Weighted interval scheduling: finding a solution

Q. DP algorithm computes optimal value. How to find solution itself?
A. Make a second pass.
Find-Solution(j)
  if j = 0
    return ∅.
  else if (v[j] + M[p[j]] > M[j–1])
    return { j } ∪ Find-Solution(p[j]).
  else
    return Find-Solution(j–1).
Analysis. # of recursive calls ≤ n ⇒ O(n).
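A Python sketch of the memoized recursion plus this second pass (illustrative, not from the original slides); v and p are 1-indexed as before, and M is a list with M[0] = 0 and M[1..n] = None, playing the role of "empty":

    # Memoized value computation and traceback; names are assumptions.
    def m_compute_opt(j, v, p, M):
        if M[j] is None:
            M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                       m_compute_opt(j - 1, v, p, M))
        return M[j]

    def find_solution(j, v, p, M):
        if j == 0:
            return set()
        if v[j] + M[p[j]] > M[j - 1]:                 # job j is in some OPT
            return {j} | find_solution(p[j], v, p, M)
        return find_solution(j - 1, v, p, M)          # job j was skipped

    # usage: M = [0] + [None] * n
    #        m_compute_opt(n, v, p, M); find_solution(n, v, p, M)

Every entry that find_solution inspects was filled by the corresponding recursive call, so the second pass needs no recomputation.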

Weighted interval scheduling: bottom-up
Bottom-up dynamic programming. Unwind recursion.
BOTTOM-UP (n, s1, …, sn, f1, …, fn, v1, …, vn)
------------------------------------------------------------
Sort jobs by finish time so that f1 ≤ f2 ≤ … ≤ fn.
Compute p(1), p(2), …, p(n).
M[0] ← 0.
FOR j = 1 TO n
  M[j] ← max { vj + M[p(j)], M[j–1] }.
------------------------------------------------------------
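Not in the original slides: a runnable Python rendering of BOTTOM-UP, including one O(n log n) way to compute p(⋅) by binary search over the sorted finish times (the sort-by-start-time route mentioned in the running-time proof also works). The function name and the (start, finish, value) tuple format are illustrative assumptions.

    from bisect import bisect_right

    def weighted_interval_scheduling(jobs):
        jobs = sorted(jobs, key=lambda job: job[1])   # sort by finish time
        n = len(jobs)
        finish = [f for _, f, _ in jobs]
        # p[j] = # of jobs finishing no later than job j starts,
        # i.e., the largest 1-indexed i < j compatible with j
        p = [0] + [bisect_right(finish, jobs[j - 1][0]) for j in range(1, n + 1)]
        M = [0] * (n + 1)
        for j in range(1, n + 1):
            M[j] = max(jobs[j - 1][2] + M[p[j]], M[j - 1])
        return M[n]

    # small instance in the spirit of the earlier counterexample:
    # weighted_interval_scheduling([(0, 6, 999), (1, 4, 1), (5, 9, 1)]) -> 999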
Exercise. Can you solve the problem using jobs sorted by start time?

6. DYNAMIC PROGRAMMING I
‣ weighted interval scheduling
‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

Least squares
Least squares. Foundational problem in statistics.
・Given n points in the plane: (x1, y1), (x2, y2), …, (xn, yn).
・Find a line y = ax + b that minimizes the sum of the squared error:
SSE = Σi=1..n (yi − a·xi − b)²

Solution. Calculus ⇒ min error is achieved when

a = [ n·Σi xi yi − (Σi xi)(Σi yi) ] / [ n·Σi xi² − (Σi xi)² ],   b = [ Σi yi − a·Σi xi ] / n

[Figure: n points in the plane and the least-squares line y = ax + b]
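A direct Python evaluation of this closed form (an illustration, not from the original slides; no safeguards for degenerate inputs such as all-equal x values):

    def least_squares(points):
        """Return (a, b) minimizing SSE for the line y = ax + b."""
        n = len(points)
        sx  = sum(x for x, _ in points)
        sy  = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    # least_squares([(0, 1), (1, 3), (2, 5)]) -> (2.0, 1.0)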

Segmented least squares
Segmented least squares.
・Points lie roughly on a sequence of several line segments.
・Given n points in the plane: (x1, y1), (x2, y2), …, (xn, yn) with x1 < x2 < ... < xn, find a sequence of lines that minimizes f(x).

Q. What is a reasonable choice for f(x) to balance accuracy (goodness of fit) and parsimony (number of lines)?

[Figure: points fit by a sequence of line segments]

Segmented least squares

Given n points in the plane: (x1, y1), (x2, y2), ..., (xn, yn) with x1 < x2 < ... < xn and a constant c > 0, find a sequence of lines that minimizes f(x) = E + c·L:
・E = the sum of the sums of the squared errors in each segment.
・L = the number of lines.

Dynamic programming: multiway choice
Notation.
・OPT(j) = minimum cost for points p1, p2, …, pj.
・e(i, j) = minimum sum of squares for points pi, pi+1, …, pj.

To compute OPT(j):
・Last segment uses points pi, pi+1, …, pj for some i.
・Cost = e(i, j) + c + OPT(i – 1).   ← optimal substructure property (proof via exchange argument)

OPT(j) = 0                                          if j = 0
OPT(j) = min 1 ≤ i ≤ j { e(i, j) + c + OPT(i−1) }   otherwise

Segmented least squares algorithm
SEGMENTED-LEAST-SQUARES (n, p1, …, pn, c)
------------------------------------------------------------
FOR j = 1 TO n
  FOR i = 1 TO j
    Compute the least squares e(i, j) for the segment pi, pi+1, …, pj.
M[0] ← 0.
FOR j = 1 TO n
  M[j] ← min 1 ≤ i ≤ j { e(i, j) + c + M[i – 1] }.
RETURN M[n].
------------------------------------------------------------
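A Python sketch of this algorithm (illustrative; the function names are assumptions). Points are assumed sorted by x-coordinate with distinct x values, and e(i, j) is evaluated from scratch with the closed-form fit, matching the O(n³) bound in the theorem on the next slide:

    def segmented_least_squares(points, c):
        n = len(points)

        def e(i, j):                       # e(i, j) for 1-indexed i..j
            pts = points[i - 1:j]
            m = len(pts)
            sx  = sum(x for x, _ in pts)
            sy  = sum(y for _, y in pts)
            sxx = sum(x * x for x, _ in pts)
            sxy = sum(x * y for x, y in pts)
            denom = m * sxx - sx * sx
            if denom == 0:                 # m == 1 (x's are distinct)
                return 0.0
            a = (m * sxy - sx * sy) / denom
            b = (sy - a * sx) / m
            return sum((y - a * x - b) ** 2 for x, y in pts)

        M = [0.0] * (n + 1)
        for j in range(1, n + 1):
            M[j] = min(e(i, j) + c + M[i - 1] for i in range(1, j + 1))
        return M[n]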

Segmented least squares analysis
Theorem. [Bellman 1961] The dynamic programming algorithm solves the segmented least squares problem in O(n³) time and O(n²) space.

Pf.
・Bottleneck = computing e(i, j) for O(n²) pairs.
・O(n) per pair using formula:

a = [ n·Σi xi yi − (Σi xi)(Σi yi) ] / [ n·Σi xi² − (Σi xi)² ],   b = [ Σi yi − a·Σi xi ] / n   ▪

Remark. Can be improved to O(n²) time and O(n) space by precomputing various statistics. How?
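One standard answer to the "How?" (a sketch, not the book's worked solution): precompute prefix sums of x, y, x², y², and xy, so that each e(i, j) is evaluated in O(1); the DP then costs O(n²) time, and the prefix arrays take O(n) space.

    from itertools import accumulate

    def make_e(points):
        def prefix(vals):
            return [0.0] + list(accumulate(vals))
        X  = prefix(x for x, _ in points)
        Y  = prefix(y for _, y in points)
        XX = prefix(x * x for x, _ in points)
        YY = prefix(y * y for _, y in points)
        XY = prefix(x * y for x, y in points)

        def e(i, j):                              # 1-indexed i..j, O(1)
            m = j - i + 1
            sx, sy = X[j] - X[i - 1], Y[j] - Y[i - 1]
            sxx, syy = XX[j] - XX[i - 1], YY[j] - YY[i - 1]
            sxy = XY[j] - XY[i - 1]
            denom = m * sxx - sx * sx
            if denom == 0:                        # single point
                return 0.0
            a = (m * sxy - sx * sy) / denom
            b = (sy - a * sx) / m
            # expansion of sum((y - a*x - b)^2) in terms of the sums
            return (syy + a * a * sxx + m * b * b
                    - 2 * a * sxy - 2 * b * sy + 2 * a * b * sx)
        return e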

6. DYNAMIC PROGRAMMING I
‣ weighted interval scheduling
‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

Knapsack problem
・Given n objects and a "knapsack."
・Item i weighs wi > 0 and has value vi > 0.
・Knapsack has capacity of W.
・Goal: fill knapsack so as to maximize total value.

i    vi   wi
1     1    1
2     6    2
3    18    5
4    22    6
5    28    7

knapsack instance (weight limit W = 11)

Ex. { 1, 2, 5 } has value 35.
Ex. { 3, 4 } has value 40.
Ex. { 3, 5 } has value 46 (but exceeds weight limit).
Greedy by value. Repeatedly add item with maximum vi.
Greedy by weight. Repeatedly add item with minimum wi.
Greedy by ratio. Repeatedly add item with maximum ratio vi / wi.

Observation. None of the greedy algorithms is optimal.

Dynamic programming: false start
Def. OPT(i) = max profit subset of items 1, …, i. 

Case 1. OPT does not select item i.
・OPT selects best of { 1, 2, …, i – 1 }.

optimal substructure property (proof via exchange argument)

Case 2. OPT selects item i.
・Selecting item i does not immediately imply that we will have to reject other items.
・Without knowing what other items were selected before i, we don't even know if we have enough room for i.

 

Conclusion. Need more subproblems!


Dynamic programming: adding a new variable
(Handwritten note: the full problem is OPT(n, W).)

Def. OPT(i, w) = max profit subset of items 1, …, i with weight limit w.

Case 1. OPT does not select item i.
・OPT selects best of { 1, 2, …, i – 1 } using weight limit w: OPT(i − 1, w).

Case 2. OPT selects item i.
・New weight limit = w – wi.
・OPT selects best of { 1, 2, …, i – 1 } using this new weight limit: OPT(i − 1, w − wi).

optimal substructure property (proof via exchange argument)

OPT(i, w) = 0                                                if i = 0
OPT(i, w) = OPT(i−1, w)                                      if wi > w
OPT(i, w) = max { OPT(i−1, w), vi + OPT(i−1, w − wi) }       otherwise

Knapsack problem: bottom-up

KNAPSACK (n, W, w1, …, wn, v1, …, vn)
------------------------------------------------------------
FOR w = 0 TO W
  M[0, w] ← 0.
FOR i = 1 TO n
  FOR w = 0 TO W
    IF (wi > w) M[i, w] ← M[i – 1, w].
    ELSE M[i, w] ← max { M[i – 1, w], vi + M[i – 1, w – wi] }.   ← exclude item i vs. include item i
RETURN M[n, W].
------------------------------------------------------------

(Handwritten note: # bits needed to store W is ⌈log₂ W⌉, yet we make a table of size n · W, so W is exponential in the input size.)
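A Python rendering of the table-filling pseudocode, plus the traceback described two slides ahead (an illustration; weights are assumed to be positive integers, and the function name is an assumption):

    def knapsack(W, weights, values):
        n = len(weights)
        M = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            wi, vi = weights[i - 1], values[i - 1]
            for w in range(W + 1):
                if wi > w:
                    M[i][w] = M[i - 1][w]                    # item i can't fit
                else:
                    M[i][w] = max(M[i - 1][w],               # exclude item i
                                  vi + M[i - 1][w - wi])     # include item i
        # trace back: take item i iff its row strictly improved the entry
        taken, w = set(), W
        for i in range(n, 0, -1):
            if M[i][w] > M[i - 1][w]:
                taken.add(i)
                w -= weights[i - 1]
        return M[n][W], taken

    # instance from the demo on the next slide:
    # knapsack(11, [1, 2, 5, 6, 7], [1, 6, 18, 22, 28]) -> (40, {3, 4})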

Knapsack problem: bottom-up demo
i    vi   wi
1     1    1
2     6    2
3    18    5
4    22    6
5    28    7

OPT(i, w) = 0                                                if i = 0
OPT(i, w) = OPT(i−1, w)                                      if wi > w
OPT(i, w) = max { OPT(i−1, w), vi + OPT(i−1, w − wi) }       otherwise

                                  weight limit w
subset of items 1, …, i      0   1   2   3   4   5   6   7   8   9  10  11
{ }                          0   0   0   0   0   0   0   0   0   0   0   0
{ 1 }                        0   1   1   1   1   1   1   1   1   1   1   1
{ 1, 2 }                     0   1   6   7   7   7   7   7   7   7   7   7
{ 1, 2, 3 }                  0   1   6   7   7  18  19  24  25  25  25  25
{ 1, 2, 3, 4 }               0   1   6   7   7  18  22  24  28  29  29  40
{ 1, 2, 3, 4, 5 }            0   1   6   7   7  18  22  28  29  34  35  40

(Handwritten notes: M[4, 11] = 40 comes from including item 4; M[5, 11] = 40 comes from excluding item 5.)

OPT(i, w) = max profit subset of items 1, …, i with weight limit w.

Knapsack problem: running time
Theorem. There exists an algorithm to solve the knapsack problem with n items and maximum weight W in Θ(n W) time and Θ(n W) space.
Pf.
・Takes O(1) time per table entry.
・There are Θ(n W) table entries.
・After computing optimal values, can trace back to find solution: take item i in OPT(i, w) iff M[i, w] > M[i – 1, w]. ▪

Remarks.
・Not polynomial in input size! Running time is "pseudo-polynomial" (weights are integers between 1 and W).
・Decision version of knapsack problem is NP-COMPLETE. [CHAPTER 8]
・There exists a poly-time algorithm that produces a feasible solution that has value within 1% of optimum. [SECTION 11.8]

6. DYNAMIC PROGRAMMING I
‣ weighted interval scheduling
‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

RNA secondary structure
RNA. String B = b1b2…bn over alphabet { A, C, G, U }.
Secondary structure. RNA is single-stranded so it tends to loop back and form base pairs with itself. This structure is essential for understanding the behavior of the molecule.
[Figure: RNA secondary structure for GUCGAUUGAGCGAAUGUAACAACGUGGCUACGGCGAGA]

RNA secondary structure
Secondary structure. A set of pairs S = { (bi, bj) } that satisfy:
・[Watson-Crick] S is a matching and each pair in S is a Watson-Crick complement: A–U, U–A, C–G, or G–C.
・[No sharp turns] The ends of each pair are separated by at least 4 intervening bases. If (bi, bj) ∈ S, then i < j – 4.
・[Non-crossing] If (bi, bj) and (bk, bl) are two pairs in S, then we cannot have i < k < j < l.