3 Dynamic Programming

Potes enim videre in hac margine, qualiter hoc operati fuimus, scilicet quod iunximus primum numerum cum secundo, videlicet 1 cum 2; et secundum cum tercio; et tercium cum quarto; et quartum cum quinto, et sic deinceps. . . .
[You can see in the margin here how we have worked this; clearly, we combined the first number with the second, namely 1 with 2, and the second with the third, and the third with the fourth, and the fourth with the fifth, and so forth. . . .]
— Leonardo Pisano, Liber Abaci (1202)

Those who cannot remember the past are condemned to repeat it.
— Jorge Agustín Nicolás Ruiz de Santayana y Borrás, The Life of Reason, Book I: Introduction and Reason in Common Sense (1905)
You know what a learning experience is?
A learning experience is one of those things that says, “You know that thing you just did? Don’t do that.”
— Douglas Adams, The Salmon of Doubt (2002)

3.1 Mātrāvṛtta
One of the earliest examples of recursion arose in India more than 2000 years ago, in the study of poetic meter, or prosody. Classical Sanskrit poetry distinguishes between two types of syllables (akṣara): light (laghu) and heavy (guru). In one class of meters, variously called mātrāvṛtta or mātrāmeru or mātrāchandas, each line of poetry consists of a fixed number of “beats” (mātrā), where each light syllable lasts one beat and each heavy syllable lasts two beats. The formal study of mātrā-vṛtta dates back to the Chandaḥśāstra, written by the scholar Piṅgala between 600 BCE and 200 BCE. Piṅgala observed that there are exactly five 4-beat meters: ——, —••, •—•, ••—, and ••••. (Here each “—” represents a long syllable and each “•” represents a short syllable.)1
1In Morse code, a “dah” lasts three times as long as a “dit”, but each “dit” or “dah” is followed by a pause with the same duration as a “dit”. Thus, each “dit-pause” is a laghu akṣara, each “dah-pause” is a guru akṣara, and there are exactly five letters (M, D, R, U, and H) whose codes last four mātrās.
Although Piṅgala’s text hints at a systematic rule for counting meters with a given number of beats,2 it took about a millennium for that rule to be stated explicitly. In the 7th century CE, another Indian scholar named Virahāṅka wrote a commentary on Piṅgala’s work, in which he observed that the number of meters with n beats is the sum of the number of meters with (n − 2) beats and the number of meters with (n − 1) beats. In more modern notation, Virahāṅka’s observation implies a recurrence for the total number M(n) of n-beat meters:
M(n) = M(n − 2) + M(n − 1)
It is not hard to see that M(0) = 1 (there is only one empty meter) and M(1) = 1 (the only one-beat meter consists of a single short syllable).
The same recurrence reappeared in Europe about 500 years after Virahāṅka, in Leonardo Pisano’s 1202 treatise Liber Abaci, one of the most influential early European works on “algorism”. In full compliance with Stigler’s Law of Eponymy,3 the modern Fibonacci numbers are defined using Virahāṅka’s recurrence, but with different base cases:
    Fn = 0              if n = 0
         1              if n = 1
         Fn−1 + Fn−2    otherwise

In particular, we have M(n) = Fn+1 for all n.

Backtracking Can Be Slow
The recursive definition of Fibonacci numbers immediately gives us a recursive algorithm for computing them. Here is the same algorithm written in pseudocode:
2The Chandaḥśāstra contains two systematic rules for listing all meters with a given number of syllables, which correspond roughly to writing numbers in binary from left to right (like Greeks) or from right to left (like Egyptians). The same text includes a recursive algorithm to compute 2^n (the number of meters with n syllables) by repeated squaring, and (arguably) a recursive algorithm to compute binomial coefficients (the number of meters with k short syllables and n syllables overall).
3“No scientific discovery is named after its original discoverer.” In his 1980 paper that gives the law its name, the statistician Stephen Stigler jokingly claimed that this law was first proposed by sociologist Robert K. Merton. However, similar statements were previously made by Vladimir Arnol’d in the 1970’s (“Discoveries are rarely attributed to the correct person.”), Carl Boyer in 1968 (“Clio, the muse of history, often is fickle in attaching names to theorems!”), Alfred North Whitehead in 1917 (“Everything of importance has been said before by someone who did not discover it.”), and even Stephen’s father George Stigler in 1966 (“If we should ever encounter a case where a theory is named for the correct man, it will be noted.”). We will see many other examples of Stigler’s law in this book.
RecFibo(n):
  if n = 0
    return 0
  else if n = 1
    return 1
  else
    return RecFibo(n − 1) + RecFibo(n − 2)
Unfortunately, this naive recursive algorithm is horribly slow. Except for the recursive calls, the entire algorithm requires only a constant number of steps: one comparison and possibly one addition. Let T(n) denote the number of recursive calls to RecFibo; this function satisfies the recurrence

    T(0) = 1,  T(1) = 1,  T(n) = T(n − 1) + T(n − 2) + 1,

which looks an awful lot like the recurrence for Fibonacci numbers themselves! Writing out the first several values of T(n) suggests the closed-form solution T(n) = 2Fn+1 − 1, which we can verify by induction (hint, hint). So computing Fn using this algorithm takes about twice as long as just counting to Fn. Methods beyond the scope of this book4 imply that Fn = Θ(φ^n), where φ = (√5 + 1)/2 ≈ 1.61803 is the so-called golden ratio. In short, the running time of this recursive algorithm is exponential in n.
We can actually see this exponential growth directly as follows. Think of the recursion tree for RecFibo as a binary tree of additions, with only 0s and 1s at the leaves. Since the eventual output is Fn, exactly Fn of the leaves must have value 1; these leaves represent the calls to RecFibo(1). An easy inductive argument (hint, hint) implies that RecFibo(0) is called exactly Fn−1 times. (If we just want an asymptotic bound, it’s enough to observe that the number of calls to RecFibo(0) is at most the number of calls to RecFibo(1).) Thus, the recursion tree has exactly Fn + Fn−1 = Fn+1 = O(Fn) leaves, and therefore, because it’s a full binary tree, 2Fn+1 − 1 = O(Fn) nodes altogether.
Memo(r)ization: Remember Everything
The obvious reason for the recursive algorithm’s lack of speed is that it computes the same Fibonacci numbers over and over and over. A single call to RecFibo(n) results in one recursive call to RecFibo(n − 1), two recursive calls to RecFibo(n − 2), three recursive calls to RecFibo(n − 3), five recursive calls to RecFibo(n − 4), and in general Fk−1 recursive calls to RecFibo(n − k) for any integer 0 ≤ k < n. Each call is recomputing some Fibonacci number from scratch.
We can speed up our recursive algorithm considerably just by writing down the results of our recursive calls and looking them up again if we need them later.
4See http://algorithms.wtf for notes on solving recurrences.
Figure 3.1. The recursion tree for computing F7; arrows represent recursive calls.
This optimization technique, now known as memoization, is usually credited to Donald Michie in 1967, but essentially the same technique was proposed in 1959 by Arthur Samuel.5
MemFibo(n):
  if n = 0
    return 0
  else if n = 1
    return 1
  else
    if F[n] is undefined
      F[n] ← MemFibo(n − 1) + MemFibo(n − 2)
    return F[n]
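Here is one way that logic might look in Python, as a minimal sketch using a dictionary as the memo (the function and variable names are mine, not part of the text):

    memo = {0: 0, 1: 1}        # base cases; a missing key means "F[n] is undefined"

    def mem_fibo(n):
        if n not in memo:
            memo[n] = mem_fibo(n - 1) + mem_fibo(n - 2)
        return memo[n]

    print(mem_fibo(7))         # 13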
Memoization clearly decreases the running time of the algorithm, but by how much? If we actually trace through the recursive calls made by MemFibo, we find that the array F[] is filled from the bottom up: first F[2], then F[3], and so on, up to F[n]. This pattern can be verified by induction: Each entry F[i] is filled only after its predecessor F[i − 1]. If we ignore the time spent in recursive calls, it requires only constant time to evaluate the recurrence for each Fibonacci number Fi . But by design, the recurrence for Fi is evaluated only once for each index i. We conclude that MemFibo performs only O(n) additions, an exponential improvement over the naïve recursive algorithm!
5Michie proposed that programming languages should support an abstraction he called a “memo function”, consisting of both a standard function (“rule”) and a dictionary (“rote”), instead of separately supporting arrays and functions. Whenever a memo function computes a function value for the first time, it “memorises” (yes, with an R) that value into its dictionary. Michie was inspired by Samuel’s use of “rote learning” to speed up the recursive evaluation of checkers game trees; Michie describes his more general proposal as “enabling the programmer to ‘Samuelize’ any functions he pleases.” (As far as I can tell, Michie never actually used the term “memoisation”.) Memoization was used even earlier by Claude Shannon’s maze-solving robot “Theseus”, which he designed and constructed in 1950.
Figure 3.2. The recursion tree for F7 trimmed by memoization. Downward green arrows indicate writing into the memoization array; upward red arrows indicate reading from the memoization array.
Dynamic Programming: Fill Deliberately
Once we see how the array F[] is filled, we can replace the memoized recurrence with a simple for-loop that intentionally fills the array in order, instead of relying on a more complicated recursive algorithm to do it for us accidentally.

IterFibo(n):
  F[0] ← 0
  F[1] ← 1
  for i ← 2 to n
    F[i] ← F[i − 1] + F[i − 2]
  return F[n]
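A Python sketch of the same bottom-up fill, with a list standing in for the array F[] (names mine):

    def iter_fibo(n):
        F = [0] * (n + 1)              # F[0] = 0
        if n >= 1:
            F[1] = 1
        for i in range(2, n + 1):      # fill the array in increasing order
            F[i] = F[i - 1] + F[i - 2]
        return F[n]

    print(iter_fibo(7))                # 13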
Now the time analysis is immediate: IterFibo clearly uses O(n) additions and stores O(n) integers.
This is our first explicit dynamic programming algorithm. The dynamic programming paradigm was formalized and popularized by Richard Bellman in the mid-1950s, while working at the RAND Corporation, although he was far from the first to use the technique. In particular, this iterative algorithm for Fibonacci numbers was already proposed by Virahāṅka and later Sanskrit prosodists in the 12th century, and again by Fibonacci at the turn of the 13th century!6
6More general dynamic programming techniques were independently deployed several times in the late 1930s and early 1940s. For example, Pierre Massé used dynamic programming algorithms to optimize the operation of hydroelectric dams in France during the Vichy regime. John von Neumann and Oskar Morgenstern developed dynamic programming algorithms to determine the winner of any two-player game with perfect information (for example, checkers). Alan Turing and his cohorts used similar methods as part of their code-breaking efforts at Bletchley Park. Both Massé’s work and von Neumann and Morgenstern’s work were first published in 1944, six years before Bellman coined the phrase “dynamic programming”. The details of Turing’s “Banburismus” were kept secret until the mid-1980s.
Many years after the fact, Bellman claimed that he deliberately chose the name “dynamic programming” to hide the mathematical character of his work from his military bosses, who were actively hostile toward anything resembling mathematical research.7 The word “programming” does not refer to writing code, but rather to the older sense of planning or scheduling, typically by filling in a table. For example, sports programs and theater programs are schedules of important events (with ads); television programming involves filling each available time slot with a show (and ads); degree programs are schedules of classes to be taken (with ads). The Air Force funded Bellman and others to develop methods for constructing training and logistics schedules, or as they called them, “programs”. The word “dynamic” was not only a reference to the multistage, time-varying processes that Bellman and his colleagues were attempting to optimize, but also a marketing buzzword that would resonate with the Futuristic Can-Do Zeitgeist™ of post-WWII America.8 Thanks in part to Bellman’s proselytizing, dynamic programming is now a standard tool for multistage planning in economics, robotics, control theory, and several other disciplines.
Don’t Remember Everything After All
In many dynamic programming algorithms, it is not necessary to retain all intermediate results through the entire computation. For example, we can significantly reduce the space requirements of our algorithm IterFibo by maintaining only the two newest elements of the array:
7Charles Erwin Wilson became Secretary of Defense in January 1953, after a dozen years as the president of General Motors. “Engine Charlie” reorganized the Department of Defense and significantly decreased its budget in his first year in office, with the explicit goal of running the Department much more like an industrial corporation. Bellman described Wilson in his 1984 autobiography as follows:
We had a very interesting gentleman in Washington named Wilson. He was secretary of Defense, and he actually had a pathological fear and hatred of the word “research”. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term “research” in his presence. You can imagine how he felt, then, about the term “mathematical”. . . . I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose?
However, Bellman’s first published use of the term “dynamic programming” already appeared in 1952, several months before Wilson took office, so this story is at least slightly embellished.
8. . . and just possibly a riff on the iconic brand name “Dynamic-Tension” for Charles Atlas’s famous series of exercises, which Charles Roman coined in 1928. Hero of the Beach!
IterFibo2(n):
  prev ← 1
  curr ← 0
  for i ← 1 to n
    next ← curr + prev
    prev ← curr
    curr ← next
  return curr
(This algorithm uses the non-standard but consistent base case F−1 = 1 so that IterFibo2(0) returns the correct value 0.) Although saving space can be absolutely crucial in practice, we won’t focus on space issues in this book.
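In Python, the same two-variable trick might look like this (a sketch; names mine):

    def iter_fibo2(n):
        prev, curr = 1, 0          # the non-standard base case F(-1) = 1
        for _ in range(n):
            prev, curr = curr, prev + curr
        return curr

    print(iter_fibo2(0), iter_fibo2(7))   # 0 13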
3.2 Aside: Even Faster Fibonacci Numbers
Although the previous algorithm is simple and attractive, it is not the fastest algorithm to compute Fibonacci numbers. We can derive a faster algorithm by exploiting the following matrix reformulation of the Fibonacci recurrence:
    [ 0 1 ] [ x ]   [   y   ]
    [ 1 1 ] [ y ] = [ x + y ]

In other words, multiplying a two-dimensional vector by the matrix [0 1; 1 1] has exactly the same effect as one iteration of the inner loop of IterFibo2. It follows that multiplying by the matrix n times is the same as iterating the loop n times:

    [ 0 1 ]^n [ 1 ]   [ Fn−1 ]
    [ 1 1 ]   [ 0 ] = [  Fn  ]

So if we want the nth Fibonacci number, we only need to compute the nth power of the matrix [0 1; 1 1]. If we use repeated squaring,9 computing the nth power of something requires only O(log n) multiplications. In this case, that means O(log n) 2 × 2 matrix multiplications, each of which reduces to a constant number of integer multiplications and additions. Thus, we can compute Fn in only O(log n) integer arithmetic operations.
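As a sketch of the repeated-squaring idea in Python, with nested tuples standing in for 2 × 2 matrices (names mine):

    def mat_mul(A, B):
        # Multiply two 2x2 matrices given as ((a, b), (c, d)).
        return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
                (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

    def mat_fibo(n):
        # Compute [0 1; 1 1]^n by repeated squaring; entry (1, 0) of the result is F(n).
        result = ((1, 0), (0, 1))      # identity matrix
        base = ((0, 1), (1, 1))
        while n > 0:
            if n & 1:
                result = mat_mul(result, base)
            base = mat_mul(base, base)
            n >>= 1
        return result[1][0]

    print([mat_fibo(n) for n in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]

Each call to mat_mul performs a constant number of integer multiplications and additions, and the loop runs O(log n) times, matching the analysis above.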
We can achieve the same speedup using the identity Fn = FmFn−m−1 + Fm+1Fn−m, which holds (by induction!) for all integers m and n. In particular, this identity implies the following mutual recurrence for pairs of adjacent Fibonacci numbers:
    F2n−1 = Fn−1² + Fn²
    F2n = Fn(Fn−1 + Fn+1) = Fn(2Fn−1 + Fn)

9as suggested by Piṅgala for powers of 2 elsewhere in the Chandaḥśāstra
(We can also derive this mutual recurrence directly from the matrix-squaring algorithm.) These recurrences translate directly into the following algorithm:
〈〈Compute the pair Fn−1, Fn〉〉
FastRecFibo(n):
  if n = 1
    return 0, 1
  m ← ⌊n/2⌋
  hprv, hcur ← FastRecFibo(m)        〈〈Fm−1, Fm〉〉
  prev ← hprv² + hcur²               〈〈F2m−1〉〉
  curr ← hcur · (2 · hprv + hcur)    〈〈F2m〉〉
  next ← prev + curr                 〈〈F2m+1〉〉
  if n is even
    return prev, curr
  else
    return curr, next
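A Python sketch of FastRecFibo (names mine; like the pseudocode, it assumes n ≥ 1):

    def fast_rec_fibo(n):
        # Return the pair (F(n-1), F(n)).
        if n == 1:
            return 0, 1
        m = n // 2
        hprv, hcur = fast_rec_fibo(m)        # F(m-1), F(m)
        prev = hprv * hprv + hcur * hcur     # F(2m-1)
        curr = hcur * (2 * hprv + hcur)      # F(2m)
        if n % 2 == 0:
            return prev, curr
        else:
            return curr, prev + curr         # F(2m), F(2m+1)

    print(fast_rec_fibo(7)[1])               # 13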
Our standard recursion tree technique implies that this algorithm performs only O(log n) integer arithmetic operations.
This is an exponential speedup over the standard iterative algorithm, which was already an exponential speedup over our original recursive algorithm. Right?
Whoa! Not so fast!
Well, not exactly. Fibonacci numbers grow exponentially fast. The nth Fibonacci number is approximately n log10 φ ≈ n/5 decimal digits long, or n log2 φ ≈ 2n/3 bits. So we can’t possibly compute Fn in logarithmic time — we need Ω(n) time just to write down the answer!
The way out of this apparent paradox is to observe that we can’t perform arbitrary-precision arithmetic in constant time. Let M(n) denote the time required to multiply two n-digit numbers. The running time of FastRecFibo satisfies the recurrence T(n) = T(⌊n/2⌋) + M(n), which solves to T(n) = O(M(n)) via recursion trees. The fastest integer multiplication algorithm known (as of 2018) runs in time O(n log n · 4^(log* n)), so that is also the running time of the fastest algorithm known (as of 2018) to compute Fibonacci numbers.
Is this algorithm slower than our “linear-time” iterative algorithms? Actually, no! Addition isn’t free, either! Adding two n-digit numbers takes O(n) time, so the running time of the iterative algorithms IterFibo and IterFibo2 is O(n2). (Do you see why?) So FastRecFibo is significantly faster than the iterative algorithms, just not exponentially faster.
In the original recursive algorithm, the extra cost of arbitrary-precision arithmetic is overwhelmed by the huge number of recursive calls. The correct recurrence is T(n) = T(n − 1) + T(n − 2) + O(n), which still has the solution T(n) = O(φ^n).
3.3 Interpunctio Verborum Redux
For our next dynamic programming algorithm, let’s consider the text segmentation problem from the previous chapter. We are given a string A[1 .. n] and a subroutine IsWord that determines whether a given string is a word (whatever that means), and we want to know whether A can be partitioned into a sequence of words.
We solved this problem by defining a function Splittable(i) that returns True if and only if the suffix A[i .. n] can be partitioned into a sequence of words. We need to compute Splittable(1). This function satisfies the recurrence
    Splittable(i) = True                                              if i > n
                    ⋁_{j=i}^{n} ( IsWord(i, j) ∧ Splittable(j + 1) )  otherwise
where IsWord(i, j) is shorthand for IsWord(A[i .. j]). This recurrence translates directly into a recursive backtracking algorithm that calls the IsWord subroutine O(2n) times in the worst case.
But for any fixed string A[1 .. n], there are only n + 1 different ways to call the recursive function Splittable(i)—one for each value of i between 1 and n + 1—and only O(n²) different ways to call IsWord(i, j)—one for each pair (i, j) such that 1 ≤ i ≤ j ≤ n. Why are we spending exponential time computing only a polynomial amount of stuff?
Each recursive subproblem is specified by an integer between 1 and n + 1, so we can memoize the function Splittable into an array SplitTable[1 .. n + 1]. Each subproblem Splittable(i) depends only on results of subproblems Splittable(j) where j > i, so the memoized recursive algorithm fills the array in decreasing index order. If we fill the array in this order deliberately, we obtain the dynamic programming algorithm shown in Figure 3.3. The algorithm makes O(n²) calls to IsWord, an exponential improvement over our earlier backtracking algorithm.

Splittable(A[1 .. n]):
  SplitTable[n + 1] ← True
  for i ← n down to 1
    SplitTable[i] ← False
    for j ← i to n
      if IsWord(i, j) and SplitTable[j + 1]
        SplitTable[i] ← True
  return SplitTable[1]

Figure 3.3. Interpunctio verborum velox
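Here is how the same right-to-left fill might look in Python, as a sketch (the helper is_word and the toy word set are mine, for illustration only):

    def splittable(A, is_word):
        # split_table[i] = True iff the suffix A[i:] can be split into words.
        n = len(A)
        split_table = [False] * (n + 1)
        split_table[n] = True                # the empty suffix is splittable
        for i in range(n - 1, -1, -1):       # decreasing index order
            for j in range(i, n):
                if is_word(A[i:j + 1]) and split_table[j + 1]:
                    split_table[i] = True
                    break
        return split_table[0]

    words = {"ART", "IS", "TOIL", "ARTIST", "OIL"}
    print(splittable("ARTISTOIL", lambda s: s in words))   # True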
3.4 The Pattern: Smart Recursion
In a nutshell, dynamic programming is recursion without repetition. Dynamic programming algorithms store the solutions of intermediate subproblems, often but not always in some kind of array or table. Many algorithms students
(and instructors, and textbooks) make the mistake of focusing on the table—because tables are easy and familiar—instead of the much more important (and difficult) task of finding a correct recurrence. As long as we memoize the correct recurrence, an explicit table isn’t really necessary, but if the recurrence is incorrect, we are well and truly hosed.

    Dynamic programming is not about filling in tables. It’s about smart recursion!
Dynamic programming algorithms are almost always developed in two distinct stages.
1. Formulate the problem recursively. Write down a recursive formula or algorithm for the whole problem in terms of the answers to smaller subproblems. This is the hard part. A complete recursive formulation has two parts:
(a) Specification. Describe the problem that you want to solve recursively, in coherent and precise English—not how to solve that problem, but what problem you’re trying to solve. Without this specification, it is impossible, even in principle, to determine whether your solution is correct.
(b) Solution. Give a clear recursive formula or algorithm for the whole problem in terms of the answers to smaller instances of exactly the same problem.
2. Build solutions to your recurrence from the bottom up. Write an algo- rithm that starts with the base cases of your recurrence and works its way up to the final solution, by considering intermediate subproblems in the correct order. This stage can be broken down into several smaller, relatively mechanical steps:
(a) Identify the subproblems. What are all the different ways your re- cursive algorithm can call itself, starting with some initial input? For example, the argument to RecFibo is always an integer between 0 and n.
(b) Choose a memoization data structure. Find a data structure that can store the solution to every subproblem you identified in step (a). This is usually but not always a multidimensional array.
(c) Identify dependencies. Except for the base cases, every subproblem depends on other subproblems—which ones? Draw a picture of your data structure, pick a generic element, and draw arrows from each of the other elements it depends on. Then formalize your picture.
(d) Find a good evaluation order. Order the subproblems so that each one comes after the subproblems it depends on. You should consider the base cases first, then the subproblems that depend only on base cases, and so on, eventually building up to the original top-level problem. The dependencies you identified in the previous step define a partial order over the subproblems; you need to find a linear extension of that partial order. Be careful!
(e) Analyze space and running time. The number of distinct subproblems determines the space complexity of your memoized algorithm. To compute the total running time, add up the running times of all possible subproblems, assuming deeper recursive calls are already memoized. You can actually do this immediately after step (a).
(f) Write down the algorithm. You know what order to consider the subproblems, and you know how to solve each subproblem. So do that! If your data structure is an array, this usually means writing a few nested for-loops around your original recurrence, and replacing the recursive calls with array look-ups.
Of course, you have to prove that each of these steps is correct. If your recurrence is wrong, or if you try to build up answers in the wrong order, your algorithm won’t work!
3.5 Warning: Greed is Stupid
If we’re incredibly lucky, we can bypass all the recurrences and tables and so forth, and solve the problem using a greedy algorithm. Like a backtracking algorithm, a greedy algorithm constructs a solution through a series of decisions, but it makes those decisions directly, without solving any recursive subproblems. While this approach seems very natural, it almost never works; optimization problems that can be solved correctly by a greedy algorithm are quite rare. Nevertheless, for many problems that should be solved by backtracking or dynamic programming, many students’ first intuition is to apply a greedy strategy.
For example, a greedy algorithm for the longest increasing subsequence problem might look for the smallest element of the input array, accept that
element as the start of the target subsequence, and then recursively look for the longest increasing subsequence to the right of that element. If this sounds like a stupid hack to you, pat yourself on the back. It isn’t even close to a correct solution.
Everyone should tattoo the following sentence on the back of their hands, right under all the rules about logarithms and big-Oh notation:

    Greedy algorithms never work! Use dynamic programming instead!
What, never?
No, never!
What, never?
Well. . . hardly ever.10
Because the greedy approach is so incredibly tempting, but so rarely correct,
I strongly advocate the following policy in any algorithms course, even (or perhaps especially) for courses that do not normally ask for proofs of correctness.12
You will not receive any credit for any greedy algorithm, on any homework or exam, even if the algorithm is correct, without a formal proof of correctness.
Moreover, the vast majority of problems for which students are tempted to submit a greedy algorithm are actually best solved using dynamic programming. So I always offer the following advice to my algorithms students.
Whenever you write—or even think—the word “greeDY”, your subconscious is telling you to use DYnamic programming.
Even for problems that can be correctly solved by greedy algorithms, it’s usually more productive to develop a backtracking or dynamic programming algorithm first. First make it work; then make it fast. We will see techniques for proving greedy algorithms correct in the next chapter.
10Greedy methods hardly ever work! Then give three cheers, and one cheer more, for the careful Captain of the Pinafore! Then give three cheers, and one cheer more, for the Captain of the Pinafore!11
11I am a very good theoretical computer scientist; specifically, a geometric algorithm specialist.
12Introducing this policy in my own algorithms courses significantly improved students’ grades, because it significantly reduced the number of times they submitted incorrect greedy algorithms.
3.6 Longest Increasing Subsequence
Another problem we considered in the previous chapter was computing the length of the longest increasing subsequence of a given array A[1 .. n] of numbers. We developed two different recursive backtracking algorithms for this problem. Both algorithms run in O(2n) time in the worst case; both algorithms can be sped up significantly via dynamic programming.
First Recurrence: Is This Next?
Our first backtracking algorithm evaluated the function LISbigger(i, j), which we defined as the length of the longest increasing subsequence of A[ j .. n] in which every element is larger than A[i]. We derived the following recurrence for this function:
    LISbigger(i, j) = 0                                     if j > n
                      LISbigger(i, j + 1)                   if A[i] ≥ A[j]
                      max { LISbigger(i, j + 1),
                            1 + LISbigger(j, j + 1) }       otherwise
To solve the original problem, we can add a sentinel value A[0] = −∞ to the array and compute LISbigger(0, 1).
Each recursive subproblem is identified by two indices i and j, so there are only O(n²) distinct recursive subproblems to consider. We can memoize the results of these subproblems into a two-dimensional array LISbigger[0 .. n, 1 .. n + 1].13 Moreover, each subproblem can be solved in O(1) time, not counting recursive calls, so we should expect the final dynamic programming algorithm to run in O(n²) time.
The order in which the memoized recursive algorithm fills this array is not immediately clear; all we can tell from the recurrence is that each entry LISbigger[i, j] is filled in after the entries LISbigger[i, j+1] and LISbigger[j, j+1] in the next column, as indicated on the left in Figure 3.4.
Fortunately, this partial information is enough to give us a valid evaluation order. If we fill the table one column at a time, from right to left, then whenever we reach an entry in the table, the entries it depends on are already available. This may not be the order that the recursive algorithm would use, but it works, so we’ll go with it. The right figure in Figure 3.4 illustrates this evaluation order, with a double arrow indicating the outer loop and single arrows indicating the
13In fact, we only need half of this array, because we always have i < j. But even if we cared about constant factors in this class (we don’t), this would be the wrong time to worry about them. First make it work; then make it better.
Figure 3.4. Subproblem dependencies for longest increasing subsequence, and a valid evaluation order
inner loop. In this case, the single arrows are bidirectional, because the order that we use to fill each column doesn’t matter.
And we’re done! Pseudocode for our dynamic programming algorithm is shown below; as expected, our algorithm clearly runs in O(n2) time. If necessary, we can reduce the space bound from O(n2) to O(n) by maintaining only the two most recent columns of the table, LISbigger[·, j] and LISbigger[·, j + 1].14
LIS(A[1 .. n]):
  A[0] ← −∞                               〈〈Add a sentinel〉〉
  for i ← 0 to n
    LISbigger[i, n + 1] ← 0               〈〈Base cases〉〉
  for j ← n down to 1
    for i ← 0 to j − 1                    〈〈...or whatever〉〉
      keep ← 1 + LISbigger[j, j + 1]
      skip ← LISbigger[i, j + 1]
      if A[i] ≥ A[j]
        LISbigger[i, j] ← skip
      else
        LISbigger[i, j] ← max{keep, skip}
  return LISbigger[0, 1]
Second Recurrence: What’s Next?
Our second backtracking algorithm evaluated the function LISfirst(i), which we defined as the length of the longest increasing subsequence of A[i .. n] that begins with A[i]. We derived the following recurrence for this function:
    LISfirst(i) = 1 + max { LISfirst(j) | j > i and A[j] > A[i] }
Here, we assume that max ∅ = 0, so that the base cases like LISfirst(n) = 1 fall out of the recurrence automatically. To solve the original problem, we can add a sentinel value A[0] = −∞ to the array and compute LISfirst(0) − 1.
In this case, recursive subproblems are indicated by a single index i, so we can memoize the recurrence into a one-dimensional array LISfirst[1 .. n]. Each entry LISfirst[i] depends only on entries LISfirst[j] with j > i, so we can fill the array in decreasing index order. To compute each LISfirst[i], we need to consider LISfirst[j] for all indices j > i, but we don’t need to consider those indices j in any particular order. The resulting dynamic programming algorithm runs in O(n²) time and uses O(n) space.

LIS2(A[1 .. n]):
  A[0] ← −∞                          〈〈Add a sentinel〉〉
  for i ← n downto 0
    LISfirst[i] ← 1
    for j ← i + 1 to n               〈〈…or whatever〉〉
      if A[j] > A[i] and 1 + LISfirst[j] > LISfirst[i]
        LISfirst[i] ← 1 + LISfirst[j]
  return LISfirst[0] − 1             〈〈Don’t count the sentinel〉〉
14See, I told you not to worry about constant factors yet!
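A Python sketch of this algorithm (0-indexed, with the sentinel prepended; names mine):

    def lis_length(A):
        # lis_first[i] = length of the longest increasing subsequence of A[i:]
        # that begins with A[i]; A[0] is a sentinel smaller than everything.
        A = [float("-inf")] + list(A)
        n = len(A) - 1
        lis_first = [1] * (n + 1)
        for i in range(n, -1, -1):           # decreasing index order
            for j in range(i + 1, n + 1):    # ...or whatever
                if A[j] > A[i] and 1 + lis_first[j] > lis_first[i]:
                    lis_first[i] = 1 + lis_first[j]
        return lis_first[0] - 1              # don't count the sentinel

    print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # 4, e.g. 3, 4, 5, 9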
3.7 Edit Distance
The edit distance between two strings is the minimum number of letter insertions, letter deletions, and letter substitutions required to transform one string into the other. For example, the edit distance between FOOD and MONEY is at most four:
    FOOD → MOOD → MOND → MONED → MONEY
This distance function was independently proposed by Vladimir Levenshtein in 1965 (working on coding theory), Taras Vintsyuk in 1968 (working on speech recognition), and Stanislaw Ulam in 1972 (working with biological sequences). For this reason, edit distance is sometimes called Levenshtein distance or Ulam distance (but strangely, never “Vintsyuk distance”).
We can visualize this editing process by aligning the strings one above the other, with a gap in the first word for each insertion and a gap in the second word for each deletion. Columns with two different characters correspond to substitutions. In this representation, the number of editing steps is just the number of columns that do not contain the same character twice.
    F O O   D
    M O N E Y
It’s fairly obvious that we can’t transform FOOD into MONEY in three steps, so the edit distance between FOOD and MONEY is exactly four. Unfortunately, it’s not so easy in general to tell when a sequence of edits is as short as possible. For example, the following alignment shows that the distance between the strings ALGORITHM and ALTRUISTIC is at most 6. Is that the best we can do?
    A L G O R   I   T H M
    A L   T R U I S T I C

Recursive Structure
To develop a dynamic programming algorithm to compute edit distance, we first need to develop a recurrence. Our alignment representation for edit sequences has a crucial “optimal substructure” property. Suppose we have the gap representation for the shortest edit sequence for two strings. If we remove the last column, the remaining columns must represent the shortest edit sequence for the remaining prefixes. We can easily prove this observation by contradiction: If the prefixes had a shorter edit sequence, gluing the last column back on would give us a shorter edit sequence for the original strings. So once we figure out what should happen in the last column, the Recursion Fairy can figure out the rest of the optimal gap representation.
Said differently, the alignment we are looking for represents a sequence of editing operations, ordered (for no particular reason) from right to left. Solving the edit distance problem requires making a sequence of decisions, one for each column in the output alignment. In the middle of this sequence of decisions, we have already aligned a suffix of one string with a suffix of the other.
Because the cost of an alignment is just the number of mismatched columns, our remaining decisions don’t depend on the editing operations we’ve already chosen; they only depend on the prefixes we haven’t aligned yet.
Thus, for any two input strings A[1 .. m] and B[1 .. n], we can formulate the edit distance problem recursively as follows: For any indices i and j, let Edit(i, j) denote the edit distance between the prefixes A[1 .. i] and B[1 .. j]. We need to compute Edit(m, n).
Recurrence
When i and j are both positive, there are exactly three possibilities for the last column in the optimal alignment of A[1 .. i] and B[1 .. j]:
• Insertion: The last entry in the bottom row is empty. In this case, the edit distance is equal to Edit(i − 1, j) + 1. The +1 is the cost of the final insertion, and the recursive expression gives the minimum cost for the remaining alignment.
• Deletion: The last entry in the top row is empty. In this case, the edit distance is equal to Edit(i, j − 1) + 1. The +1 is the cost of the final deletion, and the recursive expression gives the minimum cost for the remaining alignment.
• Substitution: Both rows have characters in the last column. If these two characters are different, then the edit distance is equal to Edit(i−1, j−1)+1. If these two characters are equal, the substitution is free, so the edit distance is Edit(i − 1, j − 1).
This generic case analysis breaks down if either i = 0 or j = 0, but those boundary cases are easy to handle directly.
• Transforming the empty string into a string of length j requires j insertions, so Edit(0, j) = j.
• Transforming a string of length i into the empty string requires i deletions, so Edit(i, 0) = i.
As a sanity check, both of these base cases correctly indicate that the edit distance between the empty string and the empty string is zero!
We conclude that the Edit function satisfies the following recurrence:
    Edit(i, j) = i                                              if j = 0
                 j                                              if i = 0
                 min { Edit(i − 1, j) + 1,
                       Edit(i, j − 1) + 1,
                       Edit(i − 1, j − 1) + [A[i] ≠ B[j]] }     otherwise

Dynamic Programming
Now that we have a recurrence, we can transform it into a dynamic programming algorithm following our usual mechanical recipe.
• Subproblems: Each recursive subproblem is identified by two indices 0 ≤ i ≤ m and 0 ≤ j ≤ n.
• Memoization structure: We can memoize all possible values of Edit(i, j) in a two-dimensional array Edit[0 .. m, 0 .. n].
• Dependencies: Each entry Edit[i, j] depends only on its three neighboring entries Edit[i − 1, j], Edit[i, j − 1], and Edit[i − 1, j − 1].
• Evaluation order: If we fill this array in standard row-major order—row by row from top down, each row from left to right—then whenever we reach an entry in the array, all the entries it depends on are already available. (This isn’t the only evaluation order we could use, but it works, so let’s go with it.)
• Space and time: The memoization structure uses O(mn) space. We can compute each entry Edit[i, j] in O(1) time once we know its predecessors, so the overall algorithm runs in O(mn) time.
Here is the resulting dynamic programming algorithm:
EditDistance(A[1 .. m], B[1 .. n]):
  for j ← 0 to n
    Edit[0, j] ← j
  for i ← 1 to m
    Edit[i, 0] ← i
    for j ← 1 to n
      ins ← Edit[i − 1, j] + 1
      del ← Edit[i, j − 1] + 1
      if A[i] = B[j]
        rep ← Edit[i − 1, j − 1]
      else
        rep ← Edit[i − 1, j − 1] + 1
      Edit[i, j] ← min {ins, del, rep}
  return Edit[m, n]
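The same algorithm in Python (a sketch; names mine, and the strings are 0-indexed):

    def edit_distance(A, B):
        m, n = len(A), len(B)
        # Edit[i][j] = edit distance between the prefixes A[:i] and B[:j].
        Edit = [[0] * (n + 1) for _ in range(m + 1)]
        for j in range(n + 1):
            Edit[0][j] = j                   # j insertions
        for i in range(1, m + 1):
            Edit[i][0] = i                   # i deletions
            for j in range(1, n + 1):
                ins = Edit[i - 1][j] + 1
                delete = Edit[i][j - 1] + 1
                rep = Edit[i - 1][j - 1] + (A[i - 1] != B[j - 1])
                Edit[i][j] = min(ins, delete, rep)
        return Edit[m][n]

    print(edit_distance("FOOD", "MONEY"))             # 4
    print(edit_distance("ALGORITHM", "ALTRUISTIC"))   # 6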
This algorithm is most commonly attributed to Robert Wagner and Michael Fischer, who described the algorithm in 1974. However, in full compliance with Stigler’s Law of Eponymy, either identical or more general algorithms were independently discovered by Taras Vintsyuk in 1968, V. M. Velichko and N. G. Zagoruyko in 1970, David Sankoff in 1972, Peter Sellers in 1974, and
almost certainly several others.15 Interestingly, none of these authors cite either Levenshtein or Ulam!
The memoization table for the input strings ALGORITHM and ALTRUISTIC is shown below. Bold numbers indicate places where characters in the two strings are equal. The edit distance between ALGORITHM and ALTRUISTIC is indeed six!
            A  L  G  O  R  I  T  H  M
         0  1  2  3  4  5  6  7  8  9
      A  1  0  1  2  3  4  5  6  7  8
      L  2  1  0  1  2  3  4  5  6  7
      T  3  2  1  1  2  3  4  4  5  6
      R  4  3  2  2  2  2  3  4  5  6
      U  5  4  3  3  3  3  3  4  5  6
      I  6  5  4  4  4  4  3  4  5  6
      S  7  6  5  5  5  5  4  4  5  6
      T  8  7  6  6  6  6  5  4  5  6
      I  9  8  7  7  7  7  6  5  5  6
      C 10  9  8  8  8  8  7  6  6  6

    [The original figure also draws arrows into each entry from the predecessor(s) that define it; those arrows are not reproduced here.]
The arrows in this table indicate which predecessor(s) actually define each entry. Each direction of arrow corresponds to a different edit operation: horizontal=deletion, vertical=insertion, and diagonal=substitution. Bold red diagonal arrows indicate “free” substitutions of a letter for itself. Any path of arrows from the top left corner to the bottom right corner of this table represents an optimal edit sequence between the two strings. The example memoization array contains exactly three directed paths from the top left corner to the bottom right corner, each indicating a different sequence of six edits transforming ALGORITHM into ALTRUISTIC, as shown on the next page.
15This algorithm is sometimes also incorrectly attributed to Saul Needleman and Christian Wunsch in 1970. “The Needleman-Wunsch algorithm” more commonly refers to the standard dynamic programming algorithm for computing the longest common subsequence of two strings (or equivalently, the edit distance where only insertions and deletions are permitted) in O(mn) time, but that attribution is also incorrect! In fact, Needleman and Wunsch’s algorithm computes (weighted) longest common subsequences (possibly with gap costs) in O(m2n2) time, using a different recurrence. Sankoff explicitly describes his O(mn)-time algorithm as an improvement of Needleman and Wunsch’s algorithm.
    ALGORI THM     ALGOR I THM     ALGOR I THM
    ALTRUISTIC     AL TRUISTIC     ALT RUISTIC
Our EditDistance algorithm does not actually compute or store any arrows in the table, but the arrow(s) leading into any entry in the table can be reconstructed on the fly in O(1) time from the numerical values. Thus, once we’ve filled in the table, we can reconstruct the shortest edit sequence in O(n+m) additional time.
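As a sketch of that reconstruction in Python (my own helper, not from the text): rebuild the table, then walk backward from the bottom right corner, at each step moving to any predecessor that accounts for the current entry, reading off one optimal alignment.

    def align(A, B):
        # Rebuild the memoization table, as in edit_distance above.
        m, n = len(A), len(B)
        Edit = [[0] * (n + 1) for _ in range(m + 1)]
        for j in range(n + 1):
            Edit[0][j] = j
        for i in range(1, m + 1):
            Edit[i][0] = i
            for j in range(1, n + 1):
                Edit[i][j] = min(Edit[i - 1][j] + 1, Edit[i][j - 1] + 1,
                                 Edit[i - 1][j - 1] + (A[i - 1] != B[j - 1]))
        # Walk backward from Edit[m][n], building the gapped strings.
        top, bottom, i, j = [], [], m, n
        while i > 0 or j > 0:
            if i > 0 and j > 0 and Edit[i][j] == Edit[i-1][j-1] + (A[i-1] != B[j-1]):
                top.append(A[i-1]); bottom.append(B[j-1]); i -= 1; j -= 1
            elif i > 0 and Edit[i][j] == Edit[i-1][j] + 1:
                top.append(A[i-1]); bottom.append(" "); i -= 1
            else:
                top.append(" "); bottom.append(B[j-1]); j -= 1
        return "".join(reversed(top)), "".join(reversed(bottom))

    print(*align("ALGORITHM", "ALTRUISTIC"), sep="\n")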
3.8 Subset Sum
Recall that the Subset Sum problem asks whether any subset of a given array X[1 .. n] of positive integers sums to a given integer T. In the previous chapter, we developed a recursive Subset Sum algorithm that can be reformulated as follows. Fix the original input array X[1 .. n] and the original target sum T, and define the boolean function
SS(i, t) = True if and only if some subset of X [i .. n] sums to t. This function satisfies the following recurrence:
    SS(i, t) = True                                   if t = 0
               False                                  if t < 0 or i > n
               SS(i + 1, t) ∨ SS(i + 1, t − X[i])     otherwise
We can transform this recurrence into a dynamic programming algorithm
following the usual boilerplate.
• Subproblems: Each subproblem is described by an integer i such that 1 ≤ i ≤ n + 1, and an integer t ≤ T . However, subproblems with t < 0 are trivial, so it seems rather silly to memoize them.16 Indeed, we can modify the recurrence so that those subproblems never arise:
    SS(i, t) = True                                   if t = 0
               False                                  if i > n
               SS(i + 1, t)                           if X[i] > t
               SS(i + 1, t) ∨ SS(i + 1, t − X[i])     otherwise

16Yes, I’m breaking my own rule against premature optimization.
• Data structure: We can memoize our recurrence into a two-dimensional array S[1..n+1,0..T], where S[i,t] stores the value of SS(i,t).
• Evaluation order: Each entry S[i, t] depends on at most two other entries, both of the form SS[i + 1, ·]. So we can fill the array by considering rows from bottom to top in the outer loop, and considering the elements in each row in arbitrary order in the inner loop.
• Space and time: The memoization structure uses O(nT) space. If S[i+1, t] and S[i + 1, t − X [i]] are already known, we can compute S[i, t] in constant time, so the algorithm runs in O(nT) time.
Here is the resulting dynamic programming algorithm:
SubsetSum(X[1 .. n], T):
  S[n + 1, 0] ← True
  for t ← 1 to T
    S[n + 1, t] ← False
  for i ← n downto 1
    S[i, 0] ← True
    for t ← 1 to X[i] − 1              〈〈Avoid the case t < 0〉〉
      S[i, t] ← S[i + 1, t]
    for t ← X[i] to T
      S[i, t] ← S[i + 1, t] ∨ S[i + 1, t − X[i]]
  return S[1, T]
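A Python sketch of SubsetSum (0-indexed; names mine):

    def subset_sum(X, T):
        # S[i][t] = True iff some subset of the suffix X[i:] sums to t.
        n = len(X)
        S = [[False] * (T + 1) for _ in range(n + 1)]
        S[n][0] = True                       # only the empty set sums to 0
        for i in range(n - 1, -1, -1):
            for t in range(T + 1):
                if t < X[i]:
                    S[i][t] = S[i + 1][t]    # avoid the case t < 0
                else:
                    S[i][t] = S[i + 1][t] or S[i + 1][t - X[i]]
        return S[0][T]

    print(subset_sum([8, 6, 7, 5, 3, 10, 9], 15))   # True (8 + 7, for example)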
The worst-case running time O(nT) for this algorithm is a significant improvement over the O(2^n)-time recursive backtracking algorithm when T is small.17 However, if the target sum T is significantly larger than 2^n, this iterative algorithm is actually slower than the naïve recursive algorithm, because it’s wasting time solving subproblems that the recursive algorithm never considers. Dynamic programming isn’t always an improvement!18
3.9 Optimal Binary Search Trees
The final problem we considered in the previous chapter was the optimal binary search tree problem. The input is a sorted array A[1 .. n] of search keys and an array f [1 .. n] of frequency counts, where f [i] is the number of times we will
17Even though the Subset Sum problem is NP-complete, this time bound does not imply that P=NP, because T is not necessarily bounded by a polynomial function of the input size.
18In the 1967 research memorandum(!) where he proposed memo functions, Donald Michie wrote, “To tabulate values of a function which will not be needed is a waste of space, and to recompute the same values more than once is a waste of time.” But in fact, tabulating values of a function that will not be needed is also a waste of time!
search for A[i]. Our task is to construct a binary search tree for that set such that the total cost of all the searches is as small as possible.
Fix the frequency array f , and let OptCost(i, k) denote the total search time in the optimal search tree for the subarray A[i .. k]. We derived the following recurrence for the function OptCost:
    OptCost(i, k) = 0                                                                 if i > k
                    Σ_{j=i}^{k} f[j] + min_{i≤r≤k} ( OptCost(i, r − 1) + OptCost(r + 1, k) )   otherwise
You can probably guess what we’re going to do with this recurrence eventually, but let’s get rid of that ugly summation first.
For any pair of indices i ≤ k, let F(i,k) denote the total frequency count for all the keys in the interval A[i .. k]:
    F(i, k) := Σ_{j=i}^{k} f[j]
This function satisfies the following simple recurrence:
    F(i, k) = f[i]                 if i = k
              F(i, k − 1) + f[k]   otherwise
We can compute all possible values of F(i, k) in O(n²) time using—you guessed it!—dynamic programming! The usual mechanical steps give us the following dynamic programming algorithm:

InitF(f[1 .. n]):
  for i ← 1 to n
    F[i, i − 1] ← 0
    for k ← i to n
      F[i, k] ← F[i, k − 1] + f[k]
We will use this short algorithm as an initialization subroutine. This initialization allows us to simplify the original OptCost recurrence as follows:
    OptCost(i, k) = 0                                                            if i > k
                    F[i, k] + min_{i≤r≤k} ( OptCost(i, r − 1) + OptCost(r + 1, k) )   otherwise
Now let’s turn the crank.
• Subproblems: Each recursive subproblem is specified by two integers i and k, such that 1 ≤ i ≤ n + 1 and 0 ≤ k ≤ n.
• Memoization: We can store all possible values of OptCost in a two- dimensional array OptCost[1 .. n + 1, 0 .. n]. (Only the entries OptCost[i, j] with j ≥ i − 1 will actually be used, but whatever.)
• Dependencies: Each entry OptCost[i, k] depends on the entries OptCost[i, j − 1] and OptCost[j + 1, k], for all j such that i ≤ j ≤ k. In other words, each table entry depends on all entries either directly to the left or directly below.
The following subroutine fills the entry OptCost[i, k], assuming all the entries it depends on have already been computed.

ComputeOptCost(i, k):
  OptCost[i, k] ← ∞
  for r ← i to k
    tmp ← OptCost[i, r − 1] + OptCost[r + 1, k]
    if OptCost[i, k] > tmp
      OptCost[i, k] ← tmp
  OptCost[i, k] ← OptCost[i, k] + F[i, k]
• Evaluation order: There are at least three different orders that can be used to fill the array. The first one that occurs to most students is to scan through the table one diagonal at a time, starting with the trivial base cases OptCost[i, i − 1] and working toward the final answer OptCost[1, n], like so:
OptimalBST(f[1 .. n]):
  InitF(f[1 .. n])
  for i ← 1 to n + 1
    OptCost[i, i − 1] ← 0
  for d ← 0 to n − 1
    for i ← 1 to n − d          〈〈…or whatever〉〉
      ComputeOptCost(i, i + d)
  return OptCost[1, n]
We could also traverse the array row by row from the bottom up, traversing each row from left to right, or column by column from left to right, traversing each column from the bottom up.
OptimalBST2(f[1 .. n]):
  InitF(f[1 .. n])
  for i ← n + 1 downto 1
    OptCost[i, i − 1] ← 0
    for j ← i to n
      ComputeOptCost(i, j)
  return OptCost[1, n]

OptimalBST3(f[1 .. n]):
  InitF(f[1 .. n])
  for j ← 0 to n
    OptCost[j + 1, j] ← 0
    for i ← j downto 1
      ComputeOptCost(i, j)
  return OptCost[1, n]
As before, we can illustrate these evaluation orders using a double-lined arrow to indicate the outer loop and single-lined arrows to indicate the inner loop. The bidirectional arrows in the first evaluation order indicate that the order of the inner loops doesn’t matter.
• Time and space: The memoization structure uses O(n2) space. No matter which evaluation order we choose, we need O(n) time to compute each entry OptCost[i, k], so our overall algorithm runs in O(n3) time.
As usual, we could have predicted the final space and time bounds directly from the original recurrence:
    OptCost(i, k) = 0                                                            if i > k
                    F[i, k] + min_{i≤r≤k} ( OptCost(i, r − 1) + OptCost(r + 1, k) )   otherwise
The OptCost function has two arguments, each of which can take on roughly n different values, so we probably need a data structure of size O(n2). On the other hand, there are three variables in the body of the recurrence (i, k, and r), each of which can take roughly n different values, so it should take O(n3) time to compute everything.
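Turning the crank in Python, with the diagonal evaluation order (a sketch; names mine):

    def optimal_bst_cost(f):
        # OptCost[i][k] = total search time of an optimal BST for keys i..k,
        # 1-indexed as in the text; entries [i][i-1] = 0 are the base cases.
        n = len(f)
        f = [0] + list(f)                               # shift to 1-indexed
        F = [[0] * (n + 1) for _ in range(n + 2)]       # F[i][k] = f[i] + ... + f[k]
        for i in range(1, n + 1):
            for k in range(i, n + 1):
                F[i][k] = F[i][k - 1] + f[k]
        OptCost = [[0] * (n + 1) for _ in range(n + 2)]
        for d in range(n):                              # one diagonal at a time
            for i in range(1, n - d + 1):
                k = i + d
                OptCost[i][k] = F[i][k] + min(
                    OptCost[i][r - 1] + OptCost[r + 1][k] for r in range(i, k + 1))
        return OptCost[1][n]

    print(optimal_bst_cost([2, 8, 5]))   # 22: key 2 at the root, keys 1 and 3 below it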
3.10 Dynamic Programming on Trees
So far, all of our dynamic programming examples use multidimensional arrays to store the results of recursive subproblems. However, as the next example shows, this is not always the most appropriate data structure to use.
An independent set in a graph is a subset of the vertices with no edges between them. Finding the largest independent set in an arbitrary graph is extremely hard; in fact, this is one of the canonical NP-hard problems we will
study in Chapter ??. But in some special classes of graphs, we can find largest independent sets quickly. In particular, when the input graph is a tree with n vertices, we can actually compute the largest independent set in O(n) time.
Suppose we are given a tree T. Without loss of generality, suppose T is a rooted tree; that is, there is a special node in T called the root, and all edges are implicitly directed away from this vertex. (If T is an unrooted tree—a connected acyclic undirected graph—we can choose an arbitrary vertex as the root.) We call vertex w a descendant of vertex v if the unique path from w to the root includes v; equivalently, the descendants of v are v itself and the descendants of the children of v. The subtree rooted at v consists of all the descendants of v and the edges between them.
For any node v in T , let MIS(v) denote the size of the largest independent set in the subtree rooted at v. Any independent set in this subtree that excludes v itself is the union of independent sets in the subtrees rooted at the children of v. On the other hand, any independent set that includes v necessarily excludes all of v’s children, and therefore includes independent sets in the subtrees rooted at v’s grandchildren. Thus, the function MIS obeys the following recurrence, where the nonstandard notation w ↓ v means “w is a child of v”:
    MIS(v) = max { Σ_{w↓v} MIS(w),  1 + Σ_{w↓v} Σ_{x↓w} MIS(x) }

We need to compute MIS(r), where r is the root of T.
Figure 3.5. Computing the maximum independent set in a tree
What data structure should we use to memoize this recurrence? The most natural choice is the tree T itself! Specifically, for each vertex v in T, we store the result of MIS(v) in a new field v.MIS. (In principle, we could use an array instead, but then we’d have to maintain pointers back and forth between each node and its corresponding array entry, so why bother?)
What’s a good order to consider the subproblems? The subproblem associated with any node v depends on the subproblems associated with the children and grandchildren of v. So we can visit the nodes in any order we like, provided that every vertex is visited before its parent; in particular, we can use a standard post-order traversal.
What’s the running time of the algorithm? The non-recursive time associated with each node v is proportional to the number of children and grandchildren of v; this number can be very different from one vertex to the next. But we can turn the analysis around: Each vertex contributes a constant amount of time to its parent and its grandparent! Because each vertex has at most one parent and at most one grandparent, the algorithm runs in O(n) time.
Here is the resulting dynamic programming algorithm. Yes, it’s still recursive, because that’s the most natural way to implement a post-order tree traversal.
MIS(v):
  skipv ← 0
  for each child w of v
    skipv ← skipv + MIS(w)
  keepv ← 1
  for each grandchild x of v
    keepv ← keepv + x.MIS
  v.MIS ← max{keepv, skipv}
  return v.MIS
We can derive an even simpler linear-time algorithm by defining two separate functions over the nodes of T:
• Let MISyes(v) denote the size of the largest independent set of the subtree rooted at v that includes v.
• Let MISno(v) denote the size of the largest independent set of the subtree rooted at v that excludes v.
Again, we need to compute max{MISyes(r), MISno(r)}, where r is the root of T. These two functions satisfy the following mutual recurrence:
    MISyes(v) = 1 + Σ_{w↓v} MISno(w)

    MISno(v) = Σ_{w↓v} max { MISyes(w), MISno(w) }
Again, we can memoize these functions into the tree itself, by defining two new fields for each vertex. A straightforward post-order traversal evaluates both functions at every node in O(n) time. The following algorithm not only memoizes both function values at v, it also returns the larger of those two values.
MIS(v):
  v.MISno ← 0
  v.MISyes ← 1
  for each child w of v
    v.MISno ← v.MISno + MIS(w)
    v.MISyes ← v.MISyes + w.MISno
  return max{v.MISyes, v.MISno}
In the second line of the inner loop, we are using the value w. MISno that was memoized by the recursive call in the previous line.
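Here is a Python sketch of this final algorithm, storing both fields directly in the nodes (the Node class and all names are mine):

    class Node:
        def __init__(self, *children):
            self.children = list(children)

    def mis(v):
        # Post-order traversal: process every child of v before using its fields.
        v.mis_no, v.mis_yes = 0, 1
        for w in v.children:
            v.mis_no += mis(w)           # the child itself may be in or out
            v.mis_yes += w.mis_no        # if v is in, its children must be out
        return max(v.mis_yes, v.mis_no)

    # A small example: a root with two subtrees holding three leaves in all.
    root = Node(Node(Node(), Node()), Node(Node()))
    print(mis(root))                     # 4: the root plus the three leaves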
Exercises
For all of the following exercises—and more generally when developing any new dynamic programming algorithm—I strongly recommend following the steps outlined in Section 3.4. In particular, don’t even start thinking about tables or for-loops until you have a complete recursive solution, including a clear English specification of the recursive subproblems you are actually solving.19 First make it work; then make it fast.
Sequences/Arrays
3. In a previous life, you worked as a cashier in the lost Antarctican colony of Nadira, spending the better part of your day giving change to your customers. Because paper is a very rare and valuable resource in Antarctica, cashiers were required by law to use the fewest bills possible whenever they gave change. Thanks to the numerological predilections of one of its founders, the currency of Nadira, called Dream Dollars, was available in the following denominations: $1, $4, $7, $13, $28, $52, $91, $365.20
(a) The greedy change algorithm repeatedly takes the largest bill that does not exceed the target amount. For example, to make $122 using the greedy algorithm, we first take a $91 bill, then a $28 bill, and finally three $1 bills. Give an example where this greedy algorithm uses more Dream Dollar bills than the minimum possible. [Hint: It may be easier to write a small program than to work this out by hand.]
(b) Describe and analyze a recursive algorithm that computes, given an integer k, the minimum number of bills needed to make k Dream Dollars. (Don’t worry about making your algorithm fast; just make sure it’s correct.)
(c) Describe a dynamic programming algorithm that computes, given an integer k, the minimum number of bills needed to make k Dream Dollars. (This one needs to be fast.)
19In my algorithms classes, any dynamic programming solution that does not include an English specification of the underlying recursive subproblems automatically gets a score of zero, even if the solution is otherwise perfect. Introducing this policy in my own algorithms courses significantly improved students’ grades, because it significantly reduced the number of times they submitted incorrect (or incoherent) dynamic programming algorithms.
20For more details on the history and culture of Nadira, including images of the various denominations of Dream Dollars, see http://moneyart.biz/dd/.
4. Describe efficient algorithms for the following variants of the text segmen- tation problem. Assume that you have a subroutine IsWord that takes an array of characters as input and returns True if and only if that string is a “word”. Analyze your algorithms by bounding the number of calls to IsWord.
(a) Given an array A[1 .. n] of characters, compute the number of partitions of A into words. For example, given the string ARTISTOIL, your algorithm should return 2, for the partitions ARTIST·OIL and ART·IS·TOIL.
(b) Given two arrays A[1 .. n] and B[1 .. n] of characters, decide whether A and B can be partitioned into words at the same indices. For example, the strings BOTHEARTHANDSATURNSPIN and PINSTARTRAPSANDRAGSLAP can be partitioned into words at the same indices as follows:
BOT·HEART·HAND·SAT·URNS·PIN
PIN·START·RAPS·AND·RAGS·LAP
(c) Given two arrays A[1 .. n] and B[1 .. n] of characters, compute the number of different ways that A and B can be partitioned into words at the same indices.
5. Suppose you are given an array A[1 .. n] of numbers, which may be positive, negative, or zero, and which are not necessarily integers.
(a) Describe and analyze an algorithm that finds the largest sum of elements in a contiguous subarray A[i .. j].
(b) Describe and analyze an algorithm that finds the largest product of elements in a contiguous subarray A[i .. j].
For example, given the array [−6,12,−7,0,14,−7,5] as input, your first algorithm should return 19, and your second algorithm should return 504.
Given the one-element array [−374] as input, your first algorithm should return 0, and your second algorithm should return 1. (The empty interval is still an interval!) For the sake of analysis, assume that comparing, adding, or multiplying any pair of numbers takes O(1) time.
[Hint: Part (a) has been a standard computer science interview question since at least the mid-1980s. You can find many correct solutions on the web; the problem even has its own Wikipedia page! But at least in 2016, a significant fraction of the solutions I found on the web for part (b) were either slower than necessary or actually incorrect.]
6. This exercise explores variants of the maximum-subarray problem (Problem 5). In all cases, your input consists of an array A[1 .. n] of real numbers (which could be positive, negative, or zero) and possibly an additional integer X ≥ 0.
(a) Wrapping around: Suppose A is a circular array. In this setting, a “contiguous subarray” can be either an interval A[i .. j] or a suffix followed by a prefix A[i .. n] · A[1 .. j]. Describe and analyze an algorithm that finds a contiguous subarray of A with the largest sum. (A sketch appears after this problem.)
(b) Long subarrays only: Describe and analyze an algorithm that finds a contiguous subarray of A of length at least X that has the largest sum. (Assume X ≤ n.)
(c) Short subarrays only: Describe and analyze an algorithm that finds a contiguous subarray of A of length at most X that has the largest sum.
(d) The Price Is Right: Describe and analyze an algorithm that finds a contiguous subarray of A with the largest sum less than or equal to X.
(e) Describe a faster algorithm for Problem 6(d) when every number in the array A is non-negative.
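One possible sketch for part (a), assuming the empty-subarray convention from Problem 5: a maximum subarray of a circular array is either an ordinary interval or the complement of a minimum-sum interval, and the minimum-sum interval is just the same scan applied to the negated array.

def max_subarray_sum(A):                 # the scan from Problem 5(a)
    best = ending = 0
    for x in A:
        ending = max(0, ending + x)
        best = max(best, ending)
    return best

def max_circular_subarray_sum(A):
    straight = max_subarray_sum(A)
    # total - (minimum-sum interval) = best suffix-plus-prefix subarray
    wrapped = sum(A) + max_subarray_sum([-x for x in A])
    return max(straight, wrapped)

print(max_circular_subarray_sum([5, -10, 5]))   # 10, wrapping around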
7. This exercise asks you to develop efficient algorithms to find optimal subse- quences of various kinds. A subsequence is anything obtained from a sequence by extracting a subset of elements, but keeping them in the same order; the elements of the subsequence need not be contiguous in the original sequence. For example, the strings C, DAMN, YAIOAI, and DYNAMICPROGRAMMING are all subsequences of the string DYNAMICPROGRAMMING.
[Hint: Exactly one of these problems can be solved in O(n) time using a greedy algorithm.]
(a) Let A[1 .. m] and B[1 .. n] be two arbitrary arrays. A common subsequence of A and B is another sequence that is a subsequence of both A and B. Describe an efficient algorithm to compute the length of the longest common subsequence of A and B. (A sketch of the standard table appears after this problem.)
(b) Let A[1 .. m] and B[1 .. n] be two arbitrary arrays. A common supersequence of A and B is another sequence that contains both A and B as subsequences. Describe an efficient algorithm to compute the length of the shortest common supersequence of A and B.
(c) Call a sequence X[1 .. n] of numbers bitonic if there is an index i with 1 < i < n, such that the prefix X[1 .. i] is increasing and the suffix X[i .. n] is decreasing. Describe an efficient algorithm to compute the length of the longest bitonic subsequence of an arbitrary array A of integers.
(d) Call a sequence X [1 .. n] of numbers oscillating if X [i] < X [i + 1] for all even i, and X[i] > X[i + 1] for all odd i. Describe an efficient algorithm to compute the length of the longest oscillating subsequence of an arbitrary array A of integers.
(e) Describe an efficient algorithm to compute the length of the shortest oscillating supersequence of an arbitrary array A of integers.
(f) Call a sequence X[1 .. n] of numbers convex if 2 · X[i] < X[i − 1] + X[i + 1] for all i with 1 < i < n. Describe an efficient algorithm to compute the length of the longest convex subsequence of an arbitrary array A of integers.
(h) Call a sequence X[1 .. n] of numbers double-increasing if X[i] > X[i − 2] for all i > 2. (In other words, a double-increasing sequence is a perfect shuffle of two increasing sequences.) Describe an efficient algorithm to compute the length of the longest double-increasing subsequence of an arbitrary array A of integers.
(i) Recall that a sequence X[1 .. n] of numbers is increasing if X[i] < X[i + 1] for every index i. Describe an efficient algorithm to compute the length of the longest common increasing subsequence of two given arrays of integers.
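For part (a) of the problem above, the standard table looks like this in Python: LCS[i][j] is the length of the longest common subsequence of the prefixes A[: i] and B[: j], and the whole table is filled in O(mn) time.

def lcs_length(A, B):
    m, n = len(A), len(B)
    LCS = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                LCS[i][j] = LCS[i - 1][j - 1] + 1   # extend a common match
            else:
                LCS[i][j] = max(LCS[i - 1][j], LCS[i][j - 1])
    return LCS[m][n]

print(lcs_length('DYNAMIC', 'PROGRAMMING'))   # 3, for example AMI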
(a) Describe a polynomial-time algorithm for the special case B = 2.
(b) Describe an algorithm for arbitrary B that runs in O(n^(B+c)) time for some fixed integer c.
(c) Describe an algorithm for arbitrary B that runs in O(n^c) time for some fixed integer c that does not depend on B.
44. A string w of parentheses ( and ) and brackets [ and ] is balanced if it satisfies one of the following conditions:
• w is the empty string.
• w = (x) for some balanced string x
• w = [x] for some balanced string x
• w = x y for some balanced strings x and y
For example, the string w = ([()][]()) [()()] () is balanced, because w = x y, where x = ([()][]()) and y = [()()] ().
(a) Describe and analyze an algorithm to determine whether a given string of parentheses and brackets is balanced. (A stack-based sketch appears after this problem.)
(b) Describe and analyze an algorithm to compute the length of a longest balanced subsequence of a given string of parentheses and brackets.
(c) Describe and analyze an algorithm to compute the length of a shortest balanced supersequence of a given string of parentheses and brackets.
(d) Describe and analyze an algorithm to compute the minimum edit distance from a given string of parentheses and brackets to a balanced string of parentheses and brackets.
(e) Describe and analyze an algorithm to compute the longest common balanced subsequence of two given strings of parentheses and brackets.
(f) Describe and analyze an algorithm to compute the longest palindromic balanced subsequence of a given string of parentheses and brackets.
(g) Describe and analyze an algorithm to compute the longest common palindromic balanced subsequence (whew!) of two given strings of parentheses and brackets.
For each problem, your input is an array w[1 .. n], where w[i] ∈ {(, ), [, ]} for every index i. (You may prefer to use different symbols instead of parentheses and brackets—for example, L, R, l, r—but please tell your grader what symbols you’re using!)
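For part (a), one standard approach is not dynamic programming at all but a single scan with a stack of unmatched openers, sketched below; the later parts are where dynamic programming earns its keep.

def is_balanced(w):
    match = {')': '(', ']': '['}
    stack = []                           # unmatched opening symbols
    for c in w:
        if c in '([':
            stack.append(c)
        elif not stack or stack.pop() != match[c]:
            return False                 # wrong closer, or nothing to close
    return not stack                     # balanced iff nothing is left open

print(is_balanced('([()][]())[()()]()'))   # True, per the example above
print(is_balanced('([)]'))                 # False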
45. Congratulations! Your research team has just been awarded a $50M multi-year project, jointly funded by DARPA, Google, and McDonald’s, to produce DWIM: The first compiler to read programmers’ minds! Your proposal and your numerous press releases all promise that DWIM will automatically correct errors in any given piece of code, while modifying that code as little as possible. Unfortunately, now it’s time to start actually making the damn thing work.
As a warmup exercise, you decide to tackle the following necessary subproblem. Recall that the edit distance between two strings is the minimum number of single-character insertions, deletions, and replacements required to transform one string into the other. An arithmetic expression is a string w such that
• w is a string of one or more decimal digits,
• w = (x) for some arithmetic expression x, or
• w = x ⋄ y for some arithmetic expressions x and y and some binary operator ⋄.
Suppose you are given a string of tokens from the alphabet {#,⋄,(,)}, where # represents a decimal digit and ⋄ represents a binary operator. Describe and analyze an algorithm to compute the minimum edit distance from the given string to an arithmetic expression.
46. Ribonucleic acid (RNA) molecules are long chains of millions of nucleotides or bases of four different types: adenine (A), cytosine (C), guanine (G), and uracil (U). The sequence of an RNA molecule is a string b[1..n], where each character b[i] ∈ {A, C, G, U} corresponds to a base. In addition to the chemical bonds between adjacent bases in the sequence, hydrogen bonds can form between certain pairs of bases. The set of bonded base pairs is called the secondary structure of the RNA molecule.
We say that two base pairs (i,j) and (i′,j′) with i < j and i′ < j′ overlap if i < i′ < j < j′ or i′ < i < j′ < j. In practice, most base pairs are non-overlapping. Overlapping base pairs create so-called pseudoknots in the secondary structure, which are essential for some RNA functions, but are more difficult to predict.
Suppose we want to predict the best possible secondary structure for a given RNA sequence. We will adopt a drastically simplified model of secondary structure:
• Each base can bond with at most one other base.
• Only A–U pairs and C–G pairs can bond.
• Pairs of the form (i, i + 1) and (i, i + 2) cannot bond.
• Bonded base pairs cannot overlap.
The last (and least realistic) restriction allows us to visualize RNA secondary structure as a sort of fat tree, as shown below.
Figure 3.8. Example RNA secondary structure with 21 bonded base pairs, indicated by heavy red lines. Gaps are indicated by dotted curves. This structure has score 2² + 2² + 8² + 1² + 7² + 4² + 7² = 187.
(a) Describe and analyze an algorithm that computes the maximum possible number of bonded base pairs in a secondary structure for a given RNA sequence. (A sketch appears after this problem.)
(b) A gap in a secondary structure is a maximal substring of unpaired bases. Large gaps lead to chemical instabilities, so secondary structures with smaller gaps are more likely. To account for this preference, let’s define the score of a secondary structure to be the sum of the squares of the gap lengths; see Figure 3.8. (This score function is utterly fictional; real RNA structure prediction requires much more complicated scoring functions.)
Describe and analyze an algorithm that computes the minimum possible score of a secondary structure for a given RNA sequence.
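For part (a), here is a sketch of one natural interval dynamic program (in the style of Nussinov's algorithm): pairs(i, j) is the maximum number of bonded pairs in the substring b[i .. j], where base i either stays unpaired or bonds with a complementary base at least three positions away. Memoized, it runs in O(n³) time.

from functools import lru_cache

COMPLEMENT = {('A', 'U'), ('U', 'A'), ('C', 'G'), ('G', 'C')}

def max_base_pairs(b):
    n = len(b)

    @lru_cache(maxsize=None)
    def pairs(i, j):
        # Maximum bonded pairs in b[i..j] (0-based, inclusive).
        if i >= j:
            return 0
        best = pairs(i + 1, j)            # base i stays unpaired
        for k in range(i + 3, j + 1):     # (i,i+1) and (i,i+2) cannot bond
            if (b[i], b[k]) in COMPLEMENT:
                # Non-overlapping pairs split the interval at k.
                best = max(best, 1 + pairs(i + 1, k - 1) + pairs(k + 1, j))
        return best

    return pairs(0, n - 1)

print(max_base_pairs('ACGUACGU'))   # 3 nested pairs: A-U, C-G, G-C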
47. (a) Describe and analyze an efficient algorithm to determine, given a string w and a regular expression R, whether w ∈ L(R). (A parse-tree evaluation sketch appears after this problem.)
(b) Generalized regular expressions allow the binary operator ∩ (intersection) and the unary operator ¬ (complement), in addition to the usual • (concatenation), + (or), and ∗ (Kleene closure) operators. NFA constructions and Kleene’s theorem imply that any generalized regular expression E represents a regular language L(E). Describe and analyze an efficient algorithm to determine, given a string w and a generalized regular expression E, whether w ∈ L(E).
In both problems, assume that you are actually given a parse tree for the (generalized) regular expression, not just a string.
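To make part (a) concrete, the sketch below evaluates the parse tree bottom-up, computing for each subexpression the set of index pairs (i, j) such that w[i : j] matches it. The tuple encoding of the parse tree is an assumption made purely for illustration. The Kleene-star case fills in intervals by increasing length, always gluing a nonempty first piece onto a shorter star match, so subexpressions that match the empty string cause no trouble.

def matches(E, w):
    """Set of pairs (i, j) with w[i:j] in L(E).  Nodes are tuples:
    ('sym', c), ('or', E1, E2), ('cat', E1, E2), ('star', E1)."""
    n = len(w)
    if E[0] == 'sym':
        return {(i, i + 1) for i in range(n) if w[i] == E[1]}
    if E[0] == 'or':
        return matches(E[1], w) | matches(E[2], w)
    if E[0] == 'cat':
        left, right = matches(E[1], w), matches(E[2], w)
        return {(i, k) for (i, j) in left for (jj, k) in right if j == jj}
    if E[0] == 'star':
        inner = matches(E[1], w)
        result = {(i, i) for i in range(n + 1)}     # the empty string
        for length in range(1, n + 1):              # increasing length
            for i in range(n - length + 1):
                k = i + length
                if any((i, j) in inner and (j, k) in result
                       for j in range(i + 1, k + 1)):
                    result.add((i, k))
        return result
    raise ValueError('unknown operator: %r' % (E[0],))

def accepts(E, w):
    return (0, len(w)) in matches(E, w)

# (0+1)*1, i.e. binary strings ending in 1:
R = ('cat', ('star', ('or', ('sym', '0'), ('sym', '1'))), ('sym', '1'))
print(accepts(R, '0101'), accepts(R, '0100'))   # True False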
Trees and Subtrees
48. You’ve just been appointed as the new organizer of Giggle, Inc.’s annual mandatory holiday party. The employees at Giggle are organized into a strict hierarchy, that is, a tree with the company president at the root. The all-knowing oracles in Human Resources have assigned a real number to each employee measuring how “fun” the employee is. In order to keep things social, there is one restriction on the guest list: an employee cannot attend the party if their immediate supervisor is also present. On the other hand, the president of the company must attend the party, even though she has a negative fun rating; it’s her company, after all. Give an algorithm that makes a guest list for the party that maximizes the sum of the “fun” ratings of the guests.
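For intuition, here is one way the underlying recurrence might look in Python, assuming the hierarchy is given as a children map with a fun rating per node (both names are assumptions): for each employee v, compute the best total fun in v's subtree both with and without v, and force the president in at the root.

def best_party(children, fun, president):
    def solve(v):
        # (best fun including v, best fun excluding v) for v's subtree
        with_v, without_v = fun[v], 0.0
        for c in children[v]:
            cw, cwo = solve(c)
            with_v += cwo                 # v attends => children stay home
            without_v += max(cw, cwo)     # v absent => children choose freely
        return with_v, without_v

    return solve(president)[0]            # the president must attend

# Toy hierarchy: 0 supervises 1 and 2; 1 supervises 3.
children = {0: [1, 2], 1: [3], 2: [], 3: []}
fun = {0: -2.0, 1: 5.0, 2: 3.0, 3: 4.0}
print(best_party(children, fun, 0))       # president + employee 3: -2 + 4 = 2.0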
49. Since so few people came to last year’s holiday party, the president of Giggle, Inc. decides to give each employee a present instead this year. Specifically, each employee must receive one of three gifts: (1) an all-expenses-paid six-week vacation anywhere in the world, (2) an all-the-pancakes-you-can-eat breakfast for two at Jumping Jack Flash’s Flapjack Stack Shack, or (3) a burning paper bag full of dog poop. Corporate regulations prohibit any employee from receiving exactly the same gift as his/her direct supervisor. Any employee who receives a better gift than his/her direct supervisor will almost certainly be fired in a fit of jealousy.
As Giggle, Inc.’s official party czar, it’s your job to decide which gift each employee receives. Describe an algorithm to distribute gifts so that the minimum number of people are fired. Yes, you may send the president a flaming bag of dog poop.
More formally, you are given a rooted tree T, representing the company hierarchy, and you want to label the nodes of T with integers 1, 2, or 3, so
that every node has a different label from its parent. The cost of a labeling is the number of nodes with smaller labels than their parents. See Figure 3.9 for an example. Describe and analyze an algorithm to compute the minimum-cost labeling of T.
Figure 3.9. A tree labeling with cost 9. The nine bold nodes have smaller labels than their parents. This is not the optimal labeling for this tree.
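One way the recurrence might look in Python, again assuming a children map: cost(v, g) is the cheapest labeling of v's subtree when v receives label g, and each child pays 1 whenever its label is smaller than its parent's.

from functools import lru_cache

def min_cost_labeling(children, root):
    @lru_cache(maxsize=None)
    def cost(v, g):
        # Cheapest labeling of v's subtree, given that v is labeled g.
        return sum(min(cost(c, h) + (h < g)   # pay 1 if child's label is smaller
                       for h in (1, 2, 3) if h != g)
                   for c in children[v])
    return min(cost(root, g) for g in (1, 2, 3))

# Tiny example: label the root 1 and each child 2 or 3, for cost 0.
children = {0: (1, 2), 1: (), 2: ()}
print(min_cost_labeling(children, 0))   # 0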
50. After the Flaming Dog Poop Holiday Debacle, you were strongly encouraged to seek other employment, and so you left Giggle for rival company Twitbook. Unfortunately, the new president of Twitbook just decided to imitate Giggle by throwing her own holiday party, and in light of your past experience, appointed you as the official party organizer. The president demands that you invite exactly k employees, including the president herself, and everyone who is invited is required to attend. Yeah, that’ll be fun.
Just like at Giggle, employees at Twitbook are organized into a strict hierarchy: a tree with the company president at the root. The all-knowing oracles in Human Resources have assigned a real number to each employee indicating the awkwardness of inviting both that employee and their imme- diate supervisor; a negative value indicates that the employee and their supervisor actually like each other. Your goal is to choose a subset of exactly k employees to invite, so that the total awkwardness of the resulting party is as small as possible. For example, if the guest list does not include both an employee and their immediate supervisor, the total awkwardness is zero. The input to your algorithm is the tree T, the integer k, and the awkwardness of each node in T.
(a) Describe an algorithm that computes the total awkwardness of the least awkward subset of k employees, assuming the company hierarchy is described by a binary tree. That is, assume that each employee directly supervises at most two others.
(b) Describe an algorithm that computes the total awkwardness of the least awkward subset of k employees, with no restrictions on the company hierarchy.
51. Suppose we need to broadcast a message to all the nodes in a rooted tree. Initially, only the root node knows the message. In a single round, any node that knows the message can forward it to at most one of its children. See Figure 3.10 for an example.
(a) Design an algorithm to compute the minimum number of rounds required to broadcast the message to all nodes in a binary tree. (A sketch appears after this problem.)
(b) Design an algorithm to compute the minimum number of rounds required to broadcast the message to all nodes in an arbitrary rooted tree. [Hint: You may find techniques in the next chapter useful to prove your algorithm is correct, even though it’s not a greedy algorithm.]
Figure 3.10. A message being distributed through a tree in five rounds.
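Here is a sketch of part (a) in Python, assuming each node is a (left, right) pair with None for a missing child: a node with two children informs one of them in the next round and the other the round after, so we simply try both orders.

def broadcast_rounds(node):
    """Minimum rounds to inform every node in the subtree, assuming the
    root of the subtree already knows the message."""
    if node is None:
        return 0
    left, right = node
    if left is not None and right is not None:
        a, b = broadcast_rounds(left), broadcast_rounds(right)
        return min(max(1 + a, 2 + b), max(1 + b, 2 + a))
    if left is not None or right is not None:
        return 1 + broadcast_rounds(left if left is not None else right)
    return 0                     # a leaf has no one left to inform

leaf = (None, None)
mid = (leaf, leaf)
print(broadcast_rounds((mid, mid)))   # a complete 7-node tree needs 4 rounds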
52. One day, Alex got tired of climbing in a gym and decided to take a very large group of climber friends outside to climb. The climbing area where they went had a huge, wide boulder, not very tall, with various marked hand and foot holds. Alex quickly determined an “allowed” set of moves that her group of friends can perform to get from one hold to another.
The overall system of holds can be described by a rooted tree T with n vertices, where each vertex corresponds to a hold and each edge corresponds to an allowed move between holds. The climbing paths converge as they go up the boulder, leading to a unique hold at the summit, represented by the root of T.26
Alex and her friends (who are all excellent climbers) decided to play a game, where as many climbers as possible are simultaneously on the boulder and each climber needs to perform a sequence of exactly k moves. Each climber can choose an arbitrary hold to start from, and all moves must move away from the ground. Thus, each climber traces out a path of k edges in the tree T, all directed toward the root. However, no two climbers are allowed to touch the same hold; the paths followed by different climbers cannot intersect at all.
Describe and analyze an efficient algorithm to compute the maximum number of climbers that can play this game. More formally, you are given a rooted tree T and an integer k, and you want to find the largest possible number of disjoint paths in T, where each path has length k. Do not assume that T is a binary tree. For example, given the tree T below and k = 3 as input, your algorithm should return the integer 8.
26Q: Why do computer science professors think trees have their roots at the top? A: Because they’ve never been outside!
Figure 3.11. Seven disjoint paths of length k = 3. This is not the largest such set of paths in this tree.
53. Let T be a rooted binary tree with n vertices, and let k ≤ n be a positive integer. We would like to mark k vertices in T so that every vertex has a nearby marked ancestor. More formally, we define the clustering cost of any subset K of vertices as
cost(K) = max_v cost(v, K),
where the maximum is taken over all vertices v in the tree, and cost(v, K) is the distance from v to its nearest ancestor in K:
cost(v, K) =   0                          if v ∈ K
               ∞                          if v is the root of T and v ∉ K
               1 + cost(parent(v), K)     otherwise
In particular, cost(K) = ∞ if K excludes the root of T.
Figure 3.12. A subset of five vertices in a binary tree, with clustering cost 3.
(a) Describe a dynamic programming algorithm to compute, given the tree T and an integer k, the minimum clustering cost of any subset of k vertices in T. For full credit, your algorithm should run in O(n²k²) time.
(b) Describe a dynamic programming algorithm to compute, given the tree T and an integer r, the size of the smallest subset of vertices whose clustering cost is at most r. For full credit, your algorithm should run in O(nr) time.
(c) Show that your solution for part (b) implies an algorithm for part (a) that runs in O(n² log n) time.
54. This question asks you to find efficient algorithms to compute the largest common rooted subtree of two given rooted trees. Recall that a rooted tree is a connected acyclic graph with a designated node called the root. A rooted subtree of a rooted tree consists of an arbitrary node and all its descendants. The precise definition of “common” depends on which pairs of rooted trees we consider isomorphic.
(a) Recall that a binary tree is a rooted tree in which every node has a (possibly empty) left subtree and a (possibly empty) right subtree. Two binary trees are isomorphic if and only if they are both empty, or their left subtrees are isomorphic and their right subtrees are isomorphic. Describe an algorithm to find the largest common binary subtree of two given binary trees.
Figure 3.13. Two binary trees, with their largest common (rooted) subtree emphasized.
(b) In an ordered rooted tree, each node has a sequence of children, which are the roots of ordered rooted subtrees. Two ordered rooted trees are isomorphic if they are both empty, or if their ith subtrees are isomorphic for every index i. Describe an algorithm to find the largest common ordered subtree of two ordered trees T1 and T2.
(c) In an unordered rooted tree, each node has an unordered set of children, which are the roots of unordered rooted subtrees. Two unordered rooted trees are isomorphic if they are both empty, or the subtrees of each root can be ordered so that their ith subtrees are isomorphic for every index i. Describe an algorithm to find the largest common unordered subtree of two unordered trees T1 and T2.
55. This question asks you to find efficient algorithms to compute optimal subtrees in unrooted trees—connected acyclic undirected graphs. A subtree of an unrooted tree is any connected subgraph.
(a) Suppose you are given an unrooted tree T with weights on its edges, which may be positive, negative, or zero. Describe an algorithm to find a path in T with maximum total weight.
(b) Suppose you are given an unrooted tree T with weights on its vertices, which may be positive, negative, or zero. Describe an algorithm to find a subtree of T with maximum total weight. [This was a 2016 Google interview question.] (A sketch appears after this problem.)
(c) Let T1 and T2 be arbitrary ordered unrooted trees, meaning that the neighbors of every node have a well-defined cyclic order. Describe an algorithm to find the largest common ordered subtree of T1 and T2.
(d) Let T1 and T2 be arbitrary unordered unrooted trees. Describe an algorithm to find the largest common unordered subtree of T1 and T2.
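For part (b), one standard sketch: root the tree anywhere, and let grow(v) be the best total weight of a subtree that contains v and otherwise lies below v, keeping a child's branch only when it helps. Every subtree has a unique highest vertex in the rooted view, so maximizing grow over all vertices gives the answer. The adjacency-map representation is an assumption for illustration.

def max_weight_subtree(adj, weight):
    """adj: vertex -> list of neighbors; weight: vertex -> number."""
    best = {}

    def grow(v, parent):
        # Best subtree containing v and lying below v (in the rooted sense).
        total = weight[v]
        for u in adj[v]:
            if u != parent:
                total += max(0, grow(u, v))   # drop branches that hurt
        best[v] = total
        return total

    grow(next(iter(adj)), None)
    return max(best.values())

# Path 1 -- 2 -- 3 with weights 4, -1, 5: keeping all three vertices wins.
adj = {1: [2], 2: [1, 3], 3: [2]}
weight = {1: 4, 2: -1, 3: 5}
print(max_weight_subtree(adj, weight))   # 8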
56. Rooted minors of rooted trees are a natural generalization of subsequences. A rooted minor of a rooted tree T is any tree obtained by contracting one or more edges. When we contract an edge uv, where u is the parent of v, the children of v become new children of u and then v is deleted. In particular, the root of T is also the root of every rooted minor of T .
Figure 3.14. A rooted tree and one of its rooted minors.
(a) Let T be a rooted tree with labeled nodes. We say that T is boring if, for each node x, all children of x have the same label; children of different nodes may have different labels. Describe an algorithm to find the largest boring rooted minor of a given labeled rooted tree.
(b) Suppose we are given a rooted tree T whose nodes are labeled with numbers. Describe an algorithm to find the largest heap-ordered rooted minor of T. That is, your algorithm should return the largest rooted minor M such that every node in M has a smaller label than its children in M.
(c) Suppose we are given a binary tree T whose nodes are labeled with numbers. Describe an algorithm to find the largest binary-search-ordered rooted minor of T . That is, your algorithm should return a rooted minor M such that every node in M has at most two children, and an inorder traversal of M is an increasing subsequence of an inorder traversal of T .
(d) Recall that a rooted tree is ordered if the children of each node have a well-defined left-to-right order. Describe an algorithm to find the largest binary-search-ordered minor of an arbitrary ordered tree T whose nodes are labeled with numbers. Again, the left-to-right order of nodes in M should be consistent with their order in T .
(e) Describe an algorithm to find the largest common ordered rooted minor of two ordered labeled rooted trees.
(f) Describe an algorithm to find the largest common unordered rooted minor of two unordered labeled rooted trees. [Hint: Combine dynamic programming with maximum flows.]