
Algorithms
or: the Unofficial Guide to the Georgia Institute of Technology’s CS6515: Graduate Algorithms

Last Updated: May 2, 2020
The only way to get through this course is by solving an uncountably-infinite number of practice problems while fueled by copious amounts of caffeine.
If you found these notes useful and are in a generous mood, feel free to fuel my stimulant addiction: shoot me a donation on Venmo @george_k_btw or PayPal kudrayvtsev@sbcglobal.net with whatever this guide was worth to you.
Good luck, and happy studying!
Why do we need to study algorithms, and why these specifically? The most important lesson that should come out of this course—one that is only mentioned in Chapter 8 of Algorithms and the 4th lecture of the course—is that many problems can be reduced to an algorithm taught here; they are considered the fundamental algorithms in computer science, and if you know them inside-and-out, you can often transform “novel” problems into a problem that can be solved by one of these algorithms.
For example, a complicated problem like “can you make change for $X using coins in a cash register” can be reduced to a Knapsack problem, a quest to “find the longest palindrome in a string” can be reduced to finding the Longest Common Subsequence, and the arbitrage problem of finding profitable currency trades on international exchanges can be reduced to finding negative-weight cycles via Shortest Paths.
Keep this in mind as you work through exercises.
kudrayvtsev

1 Dynamic Programming
  1.1 Fibbing Our Way Along
    1.1.1 Recursive Relations
  1.2 Longest Increasing Subsequence
    1.2.1 Breaking Down Subproblems
    1.2.2 Algorithm & Runtime Analysis
  1.3 Longest Common Subsequence
    1.3.1 Step 1: Identify Subproblems
    1.3.2 Step 2: Find the Recurrence
  1.4 Knapsack
    1.4.1 Greedy Algorithm
    1.4.2 Optimal Algorithm
  1.5 Knapsack With Repetition
    1.5.1 Simple Extension
    1.5.2 Optimal Solution
  1.6 Matrix Multiplication
    1.6.1 Subproblem Formulation
    1.6.2 Recurrence Relation

2 Divide & Conquer
  2.1 An Exercise in D&C: Multiplication
  2.2 Another Exercise in D&C: Median-Finding
  2.3 Solving Recurrence Relations
    2.3.1 Example 1: Integer Multiplication
    2.3.2 Example 2: Better Integer Multiplication
    2.3.3 General Form
  2.4 Fast Fourier Transform

3 Graphs
  3.1 Common Algorithms
    3.1.1 Depth-First Search
    3.1.2 Breadth-First Search
  3.2 Shortest Paths
    3.2.1 From One Vertex: Bellman-Ford
    3.2.2 From All Vertices: Floyd-Warshall
  3.3 Connected Components
    3.3.1 Undirected Graphs
    3.3.2 Directed Graphs
    3.3.3 Acyclic Digraphs
  3.4 Strongly-Connected Components
    3.4.1 Finding SCCs
  3.5 Satisfiability
    3.5.1 Solving 2-SAT Problems
  3.6 Minimum Spanning Trees
    3.6.1 Greedy Approach: Kruskal’s Algorithm
    3.6.2 Graph Cuts
    3.6.3 Prim’s Algorithm
  3.7 Flow
    3.7.1 Ford-Fulkerson Algorithm
    3.7.2 Edmonds-Karp Algorithm
    3.7.3 Variant: Flow with Demands
  3.8 Minimum Cut
    3.8.1 Max-Flow = Min-Cut Theorem
    3.8.2 Application: Image Segmentation

4 Cryptography
  4.1 Modular Arithmetic
    4.1.1 Modular Exponentiation
    4.1.2 Inverses
    4.1.3 Fermat’s Little Theorem
    4.1.4 Euler’s Totient Function
  4.2 RSA Algorithm
    4.2.1 Protocol
    4.2.2 Limitations
  4.3 Generating Primes
    4.3.1 Primality

5 Linear Programming
  5.1 2D Walkthrough
    5.1.1 Key Issues
  5.2 Generalization
    5.2.1 Standard Form
    5.2.2 Example: Max-Flow as Linear Programming
    5.2.3 Algorithm Overview
    5.2.4 Simplex Algorithm
    5.2.5 Invalid LPs
  5.3 Duality
  5.4 Max SAT
    5.4.1 Simple Scheme
  5.5 Integer Linear Programming
    5.5.1 ILP is NP-Hard

6 Computational Complexity
  6.1 Search Problems
    6.1.1 Example: SAT
    6.1.2 Example: k-Coloring Problem
    6.1.3 Example: MSTs
    6.1.4 Example: Knapsack
  6.2 Differentiating Complexities
  6.3 Reductions
    6.3.1 3SAT from SAT
    6.3.2 Independent Sets
    6.3.3 Cliques
    6.3.4 Vertex Cover
    6.3.5 Subset Sum
    6.3.6 Summary
  6.4 Undecidability

Additional Assignments

7 Homework #0
  7.1 Problem 1: From Algorithms, Ch. 0
  7.2 Problem 2: Big-Ordering

8 Homework #1
  8.1 Compare Growth Rates
  8.2 Geometric Growth
  8.3 Recurrence Relations

9 Reductions

III Exam Quick Reference
  10 Exam 1
  11 Exam 2
  12 Exam 3

Index of Terms
List of Algorithms
1.1 Fib1(n), a naïve, recursive Fibonacci algorithm.
1.2 Fib2(n), an improved, iterative Fibonacci algorithm.
1.3 LIS1(S), an approach to finding the longest increasing subsequence in a series using dynamic programming.
1.4 Knapsack(·), the standard knapsack algorithm with no object repetition allowed, finding an optimal subset of values to fit in a capacity-limited container.
1.5 KnapsackRepeating(·), the generalized knapsack algorithm in which unlimited object repetition is allowed, finding an optimal multiset of values to fit in a capacity-limited container.
1.6 An O(n³) time algorithm for computing the cost of the best chain matrix multiplication order for a set of matrices.
3.1 Explore(G, v), a function for visiting vertices in a graph.
3.2 DFS(G), depth-first search labeling of connected components.
3.3 Kruskal(·), a greedy algorithm for finding the minimum spanning tree of a graph.
3.4 The Ford-Fulkerson algorithm for computing max-flow.
4.1 ModExp(x, y, N), the recursive fast modular exponentiation algorithm.
4.2 Gcd(x, y), Euclid’s algorithm for finding the greatest common divisor.
4.3 Egcd(x, y), the extended Euclidean algorithm for finding both the greatest common divisor and multiplicative inverses.
9.1 A way of relating search and optimization problems using binary search, applied specifically to the TSP.
Before we begin to dive into all things algorithmic, some things about formatting are worth noting.
An item that is highlighted like this is a “term” that is cross-referenced wherever it’s used. Cross-references to these vocabulary words are subtly highlighted like this and link back to their first defined usage; most mentions are available in the Index.
I also sometimes include margin notes like the one here (which just links back here) that reference content sources so you can easily explore the concepts further.
1 Dynamic Programming
  1.1 Fibbing Our Way Along
  1.2 Longest Increasing Subsequence
  1.3 Longest Common Subsequence
  1.4 Knapsack
  1.5 Knapsack With Repetition
  1.6 Matrix Multiplication

2 Divide & Conquer
  2.1 An Exercise in D&C: Multiplication
  2.2 Another Exercise in D&C: Median-Finding
  2.3 Solving Recurrence Relations
  2.4 Fast Fourier Transform

3 Graphs
  3.1 Common Algorithms
  3.2 Shortest Paths
  3.3 Connected Components
  3.4 Strongly-Connected Components
  3.5 Satisfiability
  3.6 Minimum Spanning Trees
  3.7 Flow
  3.8 Minimum Cut

4 Cryptography
  4.1 Modular Arithmetic
  4.2 RSA Algorithm
  4.3 Generating Primes

5 Linear Programming
  5.1 2D Walkthrough
  5.2 Generalization
  5.3 Duality
  5.4 Max SAT
  5.5 Integer Linear Programming

6 Computational Complexity
  6.1 Search Problems
  6.2 Differentiating Complexities
  6.3 Reductions
  6.4 Undecidability
Dynamic Programming
The dynamic programming (commonly abbreviated as DP to make undergraduates giggle during lecture) problem-solving technique is a powerful approach to creating efficient algorithms in scenarios that have a lot of repetitive data. The key to leveraging dynamic programming involves approaching problems with a particular mindset (paraphrased from Algorithms, pp. 158):
From the original problem, identify a collection of subproblems which share two key properties: (a) the subproblems have a distinct ordering in how they should be performed, and (b) subproblems should be related such that solving “earlier” or “smaller” subproblems gives the solution to a larger one.
Keep this mindset in mind as we go through some examples.
1.1 Fibbing Our Way Along. . .
A classic toy example that we’ll start with to demonstrate the power of dynamic programming is a series of increasingly-efficient algorithms to compute the Fibonacci sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, . . .
In general, Fn = Fn−1 + Fn−2, with the exceptional base-case that Fn = n for n ≤ 1. The simplest, most naïve algorithm (see algorithm 1.1) for calculating the nth Fibonacci number just recurses on each Fm as needed.
Notice that each branch of the recursion tree operates independently despite them doing almost identical work: we know that to calculate Fn−1 we need Fn−2, but we are also calculating Fn−2 separately for its own sake… That’s a lot of extra work that increases exponentially with n!
Wouldn’t it be better if we kept track of the Fibonacci numbers that we generated as we went along? Enter Fib2(n), which no longer recurses down from Fn but instead
Algorithm 1.1: Fib1(n), a naïve, recursive Fibonacci algorithm.
Input: An integer n ≥ 0.
Result: Fn
if n ≤ 1 then
    return n
end
return Fib1(n − 1) + Fib1(n − 2)
builds up to it, saving intermediate Fibonacci numbers in an array:

Algorithm 1.2: Fib2(n), an improved, iterative Fibonacci algorithm.
Input: An integer n ≥ 0.
Result: Fn
if n ≤ 1 then
    return n
end
F := {0, 1}
foreach i ∈ [2, n] do
    F[i] = F[i − 1] + F[i − 2]
end
return F[n]
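For the sake of illustration, both pseudocode listings translate almost line-for-line into Python (the names fib1 and fib2 mirror the algorithm boxes; this is my sketch, not code from the course):

```python
def fib1(n: int) -> int:
    """Algorithm 1.1: naive recursion, exponential time."""
    if n <= 1:
        return n
    return fib1(n - 1) + fib1(n - 2)


def fib2(n: int) -> int:
    """Algorithm 1.2: builds the table bottom-up, O(n) additions."""
    if n <= 1:
        return n
    F = [0, 1]
    for i in range(2, n + 1):
        F.append(F[i - 1] + F[i - 2])
    return F[n]
```

Both agree on small inputs, but fib2(50) returns instantly while fib1(50) would make billions of recursive calls.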
The essence of dynamic programming lies in identifying the potential to implement
two main principles:
• avoid repetitive work and instead store it after computing it once. Identifying the overlapping subproblems (like the fact that Fn−1 and Fn−2 share large amounts of work) is a key part in developing a dynamic programming approach. This means you should not shy away from high memory usage when implementing a dynamic programming algorithm—the speed savings from caching repeated intermediate results outweigh the “cost” of memory.
• avoid recursion and instead use an iterative approach. This point is actually not universal when it comes to dynamic programming and pertains specifically to our course. We could have likewise made algorithm 1.2 pass around an array parameter to a recursive version; this would be an example of a memoization technique.
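That memoized-recursive alternative is easy to sketch with Python’s functools.lru_cache (my illustration; the course itself sticks to the iterative style):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Recursive Fibonacci with memoization: each F_i is computed once."""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

The decorator caches every intermediate result, so the recursion does the same O(n) work as the iterative table-filling version.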
Memoization and recursion often go hand-in-hand when it comes to dynamic programming solutions, but this class shies away from them. Some of the walkthroughs may not, though, since it’s (in my opinion) an arbitrary restriction that may make problems harder than they need to be.
1.1.1 Recursive Relations
Let’s look at algorithm 1.1 through a different lens and actually try to map out the recursion tree as it develops.
Suppose we want to calculate F5. . . our algorithm would then try to calculate F4 and F3 separately, which will try to calculate F3 and F2, and so on. . .
(Recursion tree for Fib1(5): Fib1(5) calls Fib1(4) and Fib1(3); Fib1(3) is computed twice, Fib1(2) three times, and the base cases even more often.)
Notice the absurd duplication of work that we avoided with Fib2(). . . Is there a way we can represent the amount of work done when calling Fib1(n) in a compact way?
Suppose T(n) represents the running time of Fib1(n). Then, our running time is similar to the algorithm itself. Since the base cases run in constant time and each recursive case takes T (n − 1) and T (n − 2), respectively, we have:
T(n) ≥ T(n − 1) + T(n − 2) + O(1)
So T(n) ≥ Fn; that is, our running time grows at least as quickly as the Fibonacci numbers themselves! That means F50 will take at least 12,586,269,025 steps (that’s 12.5 billion) to calculate with our naïve algorithm. . .
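We can check that blow-up empirically by instrumenting the naive routine (the counter is my addition): the number of calls C(n) satisfies C(n) = C(n − 1) + C(n − 2) + 1, which works out to 2Fn+1 − 1.

```python
def fib1_counted(n: int, counter: list) -> int:
    """Naive recursive Fibonacci, tallying every call in counter[0]."""
    counter[0] += 1
    if n <= 1:
        return n
    return fib1_counted(n - 1, counter) + fib1_counted(n - 2, counter)


calls = [0]
result = fib1_counted(20, calls)
# Computing F_20 = 6765 already takes 2*F_21 - 1 = 21,891 calls.
```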
The Golden Ratio
Interestingly enough, the Fibonacci numbers grow exponentially in φ, the golden ratio, an interesting mathematical constant that occurs often in nature.
The golden ratio is an irrational number:
φ = (1 + √5)/2 ≈ 1.6180 . . .

The Fibonacci numbers increase exponentially with n; specifically, Fn ≈ φn/√5.
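A quick numerical sanity check of that closed form (names are mine): rounding φ^n/√5 reproduces Fn exactly for moderate n, before floating-point error creeps in.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio


def fib_closed(n: int) -> int:
    """Approximate F_n via the closed form: round(phi^n / sqrt(5))."""
    return round(PHI ** n / math.sqrt(5))
```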
1.2 Longest Increasing Subsequence
A common, more-practical example to demonstrate the power of dynamic programming is finding the longest increasing subsequence in a series.
A series is just a list of numbers: 5, 7, 4, −3, 9, 1, 10, 4, 5, 8, 9, 3
A subsequence is a set of numbers from a series that is still in order relative to the original, but is not necessarily consecutive.
Thus, what we’re looking for in a longest-increasing-subsequence (or LIS) algorithm is the longest set of ascending numbers that are still in order relative to the original series. For our example, that would be the subsequence:
−3, 1, 4, 5, 8, 9
To start off a little simpler, we’ll just be trying to find the length. Formally:

Longest Increasing Subsequence:
Input: A series of numbers, S = {x1, x2, . . . , xn}.
Output: The length l of the subsequence S′ ⊆ S such that the elements of S′ are in increasing (ascending) order.

1.2.1 Breaking Down Subproblems
Let’s start by doing the first thing necessary for all dynamic programming problems: identify shared work. Suppose our sequence was one element shorter:
5, 7, 4, −3, 9, 1, 10, 4, 5, 8, 9
Intuitively, wouldn’t everything we need to do stay almost the same? The only thing
that should matter is checking whether or not the new digit 3 can affect our longest
sequence in some way. And wouldn’t 3 only come into play for subsequences that are currently smaller than 3?
That’s a good insight: at each step, we need to compare the new digit to the largest element of all previous subsequences. And since we don’t need the subsequence itself, we only need to keep track of its length and its maximum value. Note that we track all previous subsequences, because the “best” one at a given point in time will not necessarily be the best one at every point in time as the series grows.
What identifies a potential LIS? Its maximum value; and the LIS with maximum value xi is just the longest increasing subsequence that ends at xi! So given a list of numbers S = {x1, x2, . . . , xn}, we want a subproblem L(i) that is the length of the longest increasing subsequence of x1..i that, critically, includes xi itself:

L(i) = 1 + max{L(j) | j < i ∧ xj < xi}

(treating the maximum of an empty set as zero). This recurrence translates directly into algorithm 1.3, which, for each xi, scans all earlier elements for the best subsequence that xi can extend:

foreach i ∈ [1, n] do
    m = 0
    /* find the best LIS to append to */
    foreach j ∈ [1, i) do
        if xi > xj ∧ L[j] ≥ m then
            m = L[j]
        end
    end
    L[i] = m + 1
end
return max(L)
The running time of the algorithm is O(n²) due to the double for-loop, each over (up to) n elements.
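As a sketch (the function name lis_length is mine), the same O(n²) logic in Python, checked against the chapter’s example series:

```python
def lis_length(S: list) -> int:
    """O(n^2) DP: L[i] holds the length of the LIS ending exactly at S[i]."""
    if not S:
        return 0
    L = [1] * len(S)
    for i in range(1, len(S)):
        for j in range(i):
            # S[i] can extend any increasing subsequence ending below it.
            if S[j] < S[i] and L[j] + 1 > L[i]:
                L[i] = L[j] + 1
    return max(L)
```

On the example series, lis_length([5, 7, 4, -3, 9, 1, 10, 4, 5, 8, 9, 3]) returns 6, matching the subsequence −3, 1, 4, 5, 8, 9.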
Let’s move on to another dynamic programming example.
1.3 Longest Common Subsequence
A list of characters that appears in both strings in the same relative order (possibly with gaps) is a common subsequence. For example, given:
X = BCDBCDA
Y = ABECBAB
the LCS is BCBA.
Longest Common Subsequence:
Input: Two equal-length strings: X = {x1, x2, . . . , xn} and Y = {y1, y2, . . . , yn}.
Output: The length l of their longest common subsequence.

How can we find this algorithmically?
1.3.1 Step 1: Identify Subproblems
Dynamic programming solutions are supposed to build upon themselves. Thus, we should naturally expect our subproblems to just be increasingly-longer prefixes of the input strings. For example, suppose we’re three characters in and are analyzing the 4th character:
X = BCD ← B = x4
Y = ABE ← C = y4
Notice that l = 1 before and l = 2 after, since we go from B to BC as our LCS. In general, though, there are two cases to consider, right? Either xi = yi, in which case we know for sure that l increases since we can just append both characters to the LCS, or xi ≠ yi, in which case we need to think a little more.
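Those two cases are exactly what the standard LCS table captures; a sketch in Python (this jumps slightly ahead of the chapter’s step-by-step derivation, and the two-index table T is the usual textbook formulation rather than anything defined yet in the text):

```python
def lcs_length(X: str, Y: str) -> int:
    """T[i][j] = LCS length of the prefixes X[:i] and Y[:j]."""
    n, m = len(X), len(Y)
    T = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                # Matching characters: extend the diagonal LCS by one.
                T[i][j] = T[i - 1][j - 1] + 1
            else:
                # Mismatch: drop one character from either string.
                T[i][j] = max(T[i - 1][j], T[i][j - 1])
    return T[n][m]
```

On the chapter’s example, lcs_length("BCDBCDA", "ABECBAB") returns 4, matching the LCS BCBA.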
It’s possible that either the new xi or the new yi makes a new LCS viable. We can’t integrate them into the LCS at the same time, so let’s suppose we only have a new yi:
Y = ABE ← C
Wait, what if we built up X character-by-character to identify where C fits into LCSs? That is, we first compare C to X = B, then to X = BC, then to X = BCD, tracking the length at each point in time? We’d need a list to track the best l:
What about if there was already a C? Imagine we instead had X′ = BCC. By our logic, that’d result in the length table
In proper dynamic programming fashion, though, we should as
