
Analysis of Algorithms
V. Adamchik CSCI 570 Lecture 13 University of Southern California
Approximation Algorithms
The Design of Approximation Algorithms,
D. Williamson and D. Shmoys, Cambridge University Press, 2011.

Exam – II
Date: Thursday April 29
Time: starts at 5pm Pacific time
Length: 2 hrs and 20 mins
Time Frame: 12 hours
Locations: online, DEN
Quiz Practice: will be posted
TA Review: April 27 and April 28
Open book and notes
Use scratch paper
Follow the Honor Code (obey all rules for taking exams)
No internet searching
No talking to each other (chat, phone, messenger)

Exam – II
True/False
Multiple Choice
Written Problems:
▪ Network Flow (max flow, circulation)
▪ Linear Programming (standard and dual forms)
▪ NP Completeness (a proof by reduction)
▪ Approximations

Approximation Algorithms
Suppose we are given an NP-hard problem to solve.
Can we develop a polynomial-time algorithm that produces a “good enough” solution?
An algorithm that returns near-optimal solutions is called an approximation algorithm.

Graph Coloring
Given a graph G=(V,E), find the minimum number of colors required to color vertices, so no two adjacent vertices have the same color.
This is an NP-hard problem.
Let us develop a solution that is close enough to the optimal solution.

Greedy Approximation
Given G=(V,E) with n vertices.
Use the integers {1,2,3, …, n} to represent colors.
Order vertices by degree in descending order.
Color the first vertex (highest degree) with color 1.
Go down the vertex list and color with color 1 every vertex that is not adjacent to any vertex already given color 1.
Remove all colored vertices from the list. Repeat for the uncolored vertices with color 2, then color 3, and so on.
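The greedy (Welsh–Powell) coloring above can be sketched as follows; the small example graph at the end is hypothetical, chosen only to illustrate the procedure:

```python
def greedy_color(adj):
    """Welsh-Powell greedy coloring.
    adj: dict mapping vertex -> set of neighbors.
    Returns a dict mapping vertex -> color (1, 2, 3, ...)."""
    # order vertices by degree, descending
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    c = 0
    while len(color) < len(adj):
        c += 1
        colored_this_round = []
        for v in order:
            if v in color:
                continue
            # give v color c if it is not adjacent to any vertex colored c
            if not any(u in adj[v] for u in colored_this_round):
                color[v] = c
                colored_this_round.append(v)
    return color

# Hypothetical example: a 4-cycle a-b-c-d needs only 2 colors
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(greedy_color(adj))
```

Note that this greedy strategy gives no constant approximation guarantee in general; it is a heuristic that is often close to optimal in practice.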

Example
[Figure: a graph on vertices A–K]
Order: H, K, D, G, I, J, A, B, E, F, C

Formal Definition
Let P be a minimization problem, and I be an instance of P. Let ALG(I) be the value of the solution returned by an algorithm, and let OPT(I) be the value of an optimal solution.
Then ALG is said to be an 𝛼-approximation algorithm for some 𝛼 > 1 if ALG(I) ≤ 𝛼 ∙ OPT(I) for every instance I.

Maximization Problem
Let P be a maximization problem, and I be an instance of P. Let ALG(I) be the value of the solution returned by an algorithm, and let OPT(I) be the value of an optimal solution.
Then ALG is said to be an 𝛼-approximation algorithm for some 0 < 𝛼 < 1 if ALG(I) ≥ 𝛼 ∙ OPT(I) for every instance I.

Vertex Cover
Given G=(V,E), find the smallest S ⊆ V s.t. every edge is incident on a vertex in S.
Let us design an approximation algorithm for vertex cover.

APPROX-VC(G=(V,E))
  VC ← Ø (vertex cover)
  E′ ← E(G)
  while E′ ≠ Ø do
    let (u, v) be an arbitrary edge of E′
    VC ← VC ∪ {u, v}
    remove every edge in E′ incident on u or v
  return VC

Example

Vertex Cover
Lemma. Let M be a matching in G, and VC be a vertex cover; then |VC| ≥ |M|.

2-Approximation Vertex Cover
Approx-VC(G):
  M – a maximal matching on G
  VC – take both endpoints of edges in M
  return VC
Theorem. Let OPT(G) be the size of the optimal vertex cover and VC = Approx-VC(G). Then |VC| ≤ 2∙OPT(G).
Proof.
Can we do better than a 2-approximation?

Traveling Salesman Problem
The traveling salesman problem consists of a salesman and a set of cities (a complete weighted graph). The salesman has to visit each one of the cities, starting from a certain one (e.g. the hometown) and returning to the same city. The challenge of the problem is that the traveling salesman wants to minimize the total length of the trip.
Theorem. The traveling salesman problem is NP-complete.
Proof.
First, we have to prove that TSP belongs to NP. Second, we prove that TSP is NP-hard.

Solving TSP
Given an undirected complete graph G=(V,E) with edge cost c: E → R+, find a min-cost Hamiltonian cycle.
We will consider the metric TSP. In the metric TSP, edge costs have to satisfy the triangle inequality, i.e. for any three vertices x, y, z, c(x,z) ≤ c(x,y) + c(y,z).
We develop an approximation greedy algorithm.

Approximation Greedy Algorithm
Approx-TSP(G):
1) Find an MST of G (a complete graph)
2) Create a cycle by doubling its edges
3) Shortcut repeated vertices using the triangle inequality
[Figure: worked example on a small weighted graph]

Approximation TSP
Theorem. Approx-TSP is a 2-approximation algorithm for the metric TSP.
Proof.
|Approx-TSP| ≤ 2∙|MST| ≤ 2∙|OPT|
The first inequality uses the triangle inequality: shortcutting the doubled MST walk never increases its cost. The second holds because we can get a spanning tree from the optimal Hamiltonian cycle by removing an edge, so |MST| ≤ |OPT|.

Christofides Algorithm
Observe that the factor 2 in the approximation ratio is due to doubling the edges. But any graph whose vertices all have even degree has an Eulerian cycle. (An Eulerian cycle visits each edge exactly once.) Thus we only have to add edges between the odd-degree vertices of the MST (a minimum-cost perfect matching on them).

Christofides Algorithm
Theorem (without proof). Christofides is a 3/2-approximation for the metric TSP.
The algorithm has been known for over 40 years, and yet no improvement has been made since its discovery.
* Not for the exam

General TSP
Theorem. If P ≠ NP, then for every 𝛼 > 1 there is NO polynomial-time 𝛼-approximation algorithm for the general TSP.
Proof.
Suppose for contradiction that such an 𝛼 -approximation algorithm exists. We will use this algorithm to solve the Hamiltonian Cycle problem.
Start with G=(V,E) and create a new complete graph G’ with the cost function
c(u,v) = 1, if (u,v) ∈ E
c(u,v) = 𝛼∙|V| + 1, otherwise

Proof (continued)
If G has a Hamiltonian cycle, then |TSP(G’)| = |V|, using only edges of cost 1.
If G has no Hamiltonian cycle, then any tour must use at least one edge of cost 𝛼∙|V| + 1, so
|TSP(G’)| ≥ (|V| – 1) + 𝛼∙|V| + 1 = |V| + 𝛼∙|V| > 𝛼∙|V|
Since the two cases differ by more than a factor of 𝛼, our approximation algorithm would be able to distinguish between them, and thus decide whether G has a Hamiltonian cycle. Contradiction.
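Returning to the metric case, the MST-doubling 2-approximation (Approx-TSP) can be sketched as below. This is a minimal illustration, not the lecture's exact pseudocode: it builds the MST with Prim's algorithm and performs the shortcutting by visiting vertices in DFS preorder; the 4-point metric at the end is a hypothetical example.

```python
def approx_tsp(n, cost):
    """2-approximation for metric TSP on vertices 0..n-1.
    cost[i][j] must satisfy the triangle inequality."""
    # Step 1: build an MST rooted at 0 with Prim's algorithm
    in_tree = {0}
    children = {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: cost[e[0]][e[1]])
        children[u].append(v)
        in_tree.add(v)
    # Steps 2-3: doubling every MST edge gives an Eulerian walk;
    # shortcutting repeated vertices = visiting vertices in DFS preorder
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(0)  # return to the start
    return tour

# Hypothetical metric instance: 4 points on a line at positions 0, 1, 2, 3
pos = [0, 1, 2, 3]
cost = [[abs(a - b) for b in pos] for a in pos]
tour = approx_tsp(4, cost)
length = sum(cost[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
print(tour, length)
```

By the theorem above, the returned tour costs at most twice the optimum on any metric instance.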

Bin Packing
You have an infinite supply of bins, each of which can hold a maximum weight of M. You also have n objects, each of which has a weight wi (wi is at most M). Our goal is to partition the objects into bins so that we use as few bins as possible. This problem is NP-hard in general.
Devise a 2-approximation algorithm.

First-fit approach
1. process items in the original order
2. place each item into the first (lowest-numbered) bin that can still hold it
3. if an item does not fit into any open bin, open a new bin
Runtime: O(n²) with a straightforward scan over the open bins.
Example,
items = {40, 20, 35, 15, 25, 5, 30, 10}
M = 45
[Figure: the first-fit packing uses 5 bins, Bin 1 – Bin 5]
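A minimal first-fit implementation, run on the example above:

```python
def first_fit(items, capacity):
    """Place each item into the first bin that still has room;
    open a new bin when no open bin fits. Returns the list of bins."""
    bins = []
    for w in items:
        for b in bins:
            if sum(b) + w <= capacity:  # first bin that still fits
                b.append(w)
                break
        else:
            bins.append([w])  # no open bin fits: open a new one
    return bins

bins = first_fit([40, 20, 35, 15, 25, 5, 30, 10], 45)
print(len(bins), bins)  # 5 bins: [[40, 5], [20, 15, 10], [35], [25], [30]]
```

Total weight here is 180 and 180/45 = 4, so an optimal packing uses 4 bins while first-fit uses 5.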

Proof: a 2-approximation
Suppose we use m bins.
Let m* denote the optimal number of bins.
Clearly, m* ≥ (∑wi)/M.
On the other hand, (∑wi)/M > (m − 1)/2.
This follows from the fact that at least m − 1 bins are more than half full: if two bins were at most half full, first-fit would have placed the items of the later bin into the earlier one.
We have shown m* ≥ (∑wi)/M > (m − 1)/2. Comparing the left- and right-hand sides, we derive
2m* > m − 1, i.e., m < 2m* + 1, hence m ≤ 2m*.

Developing a lower bound (M = 1)
Insert 2N items: N items of weight 1/2 − ε followed by N items of weight 1/2 + ε.
OPT: N bins (pair each 1/2 − ε item with a 1/2 + ε item)
FF: N/2 bins + N bins = 1.5N bins
FF ≥ 1.5 OPT
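A quick sanity check of this construction; `first_fit` here is the straightforward quadratic implementation:

```python
def first_fit(items, capacity):
    """Pack items by first-fit; returns the list of bins."""
    bins = []
    for w in items:
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:
            bins.append([w])
    return bins

N, eps = 10, 0.01
# small items first, as in the construction above
items = [0.5 - eps] * N + [0.5 + eps] * N
print(len(first_fit(items, 1)))  # 15 bins = 1.5 * N, while OPT = N = 10
```

The small items pair up two per bin (N/2 bins), and each large item then needs its own bin (N bins), matching the 1.5N count.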

Developing a lower bound (M = 1)
3N items: N items of weight 1/7 + ε, then N of weight 1/3 + ε, then N of weight 1/2 + ε.
OPT: N bins
FF: N/6 + N/2 + N bins = 5/3 N
FF ≥ 1.66 OPT

Developing a lower bound
Observe: 1 − (1/2 + 1/3 + 1/7) = 1/42.
4N items: N items of weight 1/43 + ε, then N of 1/7 + ε, then N of 1/3 + ε, then N of 1/2 + ε.
OPT: N bins
FF: N/42 + N/6 + N/2 + N = 71/42 N
FF ≥ 1.6905 OPT

Theorem (1976)
First-fit is a 17/10-approximation algorithm.

Developing a lower bound
Observe: 1 − (1/2 + 1/3 + 1/7 + 1/43) = 1/1806.
Continue the construction with N items each of weight 1/1807 + ε, 1/43 + ε, 1/7 + ε, 1/3 + ε, 1/2 + ε.
OPT: N bins
FF = N(1 + 1/2 + 1/6 + 1/42 + 1/1806 + …) = (1.691…) OPT
In the limit, the ratio is 1.7

Sorted First-fit approach
1. sort the input in descending order
2. apply the first-fit algorithm
Theorem.
This is an 11/9-approximation algorithm.
Example,
items = {40, 20, 35, 15, 25, 5, 30, 10}
M = 45
[Figure: the resulting first-fit-decreasing packing]
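First-fit decreasing on the example above can be sketched as (a minimal, self-contained version):

```python
def first_fit_decreasing(items, capacity):
    """Sort items in descending order, then pack by first-fit."""
    bins = []
    for w in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:  # first open bin that fits
                b.append(w)
                break
        else:
            bins.append([w])  # open a new bin
    return bins

items = [40, 20, 35, 15, 25, 5, 30, 10]
print(first_fit_decreasing(items, 45))  # 4 bins, vs. 5 for plain first-fit
```

On this instance sorting first achieves the optimum of 4 bins (total weight 180 = 4 × 45), one bin fewer than unsorted first-fit.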

Discussion Problem
Suppose you are given a set A of positive integers a1, a2, …, an and a positive integer B. A subset S ⊆ A is called feasible if the sum of the numbers in S does not exceed B. You would like to select a feasible subset S ⊆ A whose total sum is as large as possible. Give a linear-time algorithm that finds a feasible set S ⊆ A whose total sum is at least half as large as the maximum total sum of any feasible set. You may assume that ai ≤ B.
Example: If A = {8, 2, 4, 3, 5} and B = 11 then the optimal solution is the subset S = {8, 3}.
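One standard way to meet the 1/2 guarantee (a sketch, not necessarily the intended solution): greedily add items that still fit; if the greedy sum ends up below B/2, then the first skipped item alone must weigh more than B/2, so return that single item instead. Either way the result is at least OPT/2, since OPT ≤ B.

```python
def half_approx_subset(A, B):
    """Linear-time feasible subset whose sum is at least half the optimum."""
    S, total, skipped = [], 0, None
    for a in A:
        if total + a <= B:
            S.append(a)
            total += a
        elif skipped is None:
            skipped = a  # remember one item that did not fit
    if total >= B / 2 or skipped is None:
        # greedy sum is already >= B/2 >= OPT/2, or we took everything
        return S
    # total < B/2 and some item was skipped: when it was skipped the running
    # sum was < B/2, so that item alone exceeds B/2 (and is feasible, a <= B)
    return [skipped]

print(half_approx_subset([8, 2, 4, 3, 5], 11))  # [8, 2], sum 10 >= 11/2
```

On the example instance the greedy set {8, 2} has sum 10, within a factor 1/2 of the optimum 11 achieved by {8, 3}.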