
COMS4236: Introduction to
Computational Complexity

Spring 2018

Mihalis Yannakakis

Lecture 27, 4/24/18

Outline
• Approximability of optimization problems

• Maximum Satisfiability

• Probabilistically Checkable Proofs

• PCP Theorem
• Some consequences on approximability

Decision vs. Optimization Problems
• NP-completeness theory explains the difficulty of solving

many common hard optimization problems.
• To do this, optimization problems have to be first

reformulated as decision problems.
• All NP-complete problems are equally difficult, up to a

polynomial factor in time – But ..
• The NP-hard optimization problems are “equally difficult”

as far as finding the very best solution is concerned
• They are not equally difficult in terms of the quality of the

solutions we can find in polynomial time :
– For some problems we can find very good solutions, very

close to the optimal ones, while for others we cannot.

Approximation
• We measure “goodness” of solutions relative to the

optimal:
– Assume costs (values) of the solutions of a minimization
(resp. maximization) problem are >0

Given instance x, with optimal cost (value) OPT(x),
the approximation ratio of a solution s is

cost(s) / OPT(x)   for a minimization problem

OPT(x) / value(s)  for a maximization problem

Note: the ratio is ≥ 1 in both cases.

(Note: Some authors use the ratio value/OPT also for maximization
problems, so in that case the ratio would be ≤1 for max problems)
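
A small, hypothetical Python helper (not from the slides) illustrating the convention used here, where the ratio is always ≥ 1:

```python
# Hypothetical helper illustrating the convention above (ratio >= 1 in both
# cases); assumes all solution costs/values and OPT are > 0.

def approx_ratio(opt: float, solution: float, maximize: bool) -> float:
    """cost(s)/OPT(x) for minimization, OPT(x)/value(s) for maximization."""
    return opt / solution if maximize else solution / opt

# A minimization solution of cost 150 on an instance with OPT = 100 has
# ratio 1.5; a maximization solution of value 80 with OPT = 100 has ratio 1.25.
print(approx_ratio(100, 150, maximize=False))  # 1.5
print(approx_ratio(100, 80, maximize=True))    # 1.25
```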

Approximation ratio of an Algorithm
• Maximum approximation ratio over all instances
• If it is unbounded, we often express it as a function ρ(n) of a

parameter n that measures the size of the input (usually, a
“natural” parameter such as #nodes of a graph, #items etc,
not the number of bits)

• ρ(n) ≥ 1 for all algorithms

• ρ(n) = 1 means that the algorithm always finds the optimal
solution

Approximability of Problems
• What is the best approximation ratio that we can achieve by

a polynomial time algorithm?
• If the problem is NP-hard we cannot hope for ratio = 1 (unless P=NP), but we’d

like to get as close as possible
• Best case: Polynomial Time Approximation Scheme

(PTAS): Can get any ratio r>1, i.e., for every ε>0 there is a
p-time algorithm with ratio 1+ε (where the running time
depends on ε)

• Next best case: Can achieve at least some (hopefully small)
constant ratio

• NP-hard optimization problems can differ drastically in their
approximability properties.

Degrees of Approximability
• Qualitative classification of problems

(The slide shows these as nested classes of problems, from exactly solvable to hardest to approximate:)

– P: can compute the optimum in polynomial time

– PTAS (every ratio r>1): can be approximated
arbitrarily close to 1 in P time.
Ex: Euclidean TSP, Knapsack

– some constant r>1: can be approximated within
some r>1 but not every r>1.
Ex: Metric TSP, Node Cover, Max Sat

– no constant r: cannot be approximated within
any constant factor r.
Ex: Clique, Graph Coloring

MAX SAT (Maximum Satisfiability)
• Input: Set of clauses
• Problem: Find a truth assignment that satisfies the maximum

number of clauses.

• Cook’s reduction ⇒ NP-hard, but the reduction is very fragile:
In the No case, maybe all but one clause can be
satisfied ⇒ ratio = n/(n-1): no constant gap from 1.

• We can trivially satisfy at least ½ of the clauses: any truth
assignment or its complement satisfies ≥ ½ of the clauses
(every clause is satisfied by one of the two; see the sketch below).

• And we can do better.
• But is there a threshold?
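
A minimal sketch (not from the lecture) of this trivial ½-guarantee: every clause is satisfied by an assignment or by its complement, so the better of the two satisfies at least half of the clauses.

```python
# Clauses are lists of nonzero ints: +i means variable i, -i means its negation.

def count_satisfied(clauses, assignment):
    """Number of clauses with at least one true literal under the assignment."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def half_approx_max_sat(clauses, num_vars):
    """Return a truth assignment satisfying >= half of the clauses."""
    a = {v: True for v in range(1, num_vars + 1)}       # an arbitrary assignment
    a_bar = {v: not val for v, val in a.items()}        # its complement
    return a if count_satisfied(clauses, a) >= count_satisfied(clauses, a_bar) else a_bar

# Example: (x1 or ~x2) and (~x1) and (x2 or x3)
clauses = [[1, -2], [-1], [2, 3]]
best = half_approx_max_sat(clauses, num_vars=3)
print(count_satisfied(clauses, best), "of", len(clauses), "clauses satisfied")
```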

Robust Version of Cook’s Reduction [ALMSS]

• There is a constant t<1 and a P-time reduction from any language L in NP to 3SAT which, for any input string x for L, constructs a 3SAT formula φ_x such that

• x∈L ⇒ there is an assignment that satisfies all the clauses of φ_x (same as in usual NP-completeness)

• x∉L ⇒ every assignment satisfies at most a fraction t of the clauses (more robust: satisfiability fails noticeably)

⇒ we cannot approximate MAX 3SAT with ratio better than 1/t unless P=NP.

Usual NP Proof

• Verifier V: input an instance x and a “proof” y; runs in polynomial time; outputs Yes or No

• x∈L ⇒ ∃ “proof” y, |y|≤p(|x|), such that V(x,y) accepts (completeness)

• x∉L ⇒ ∀ “proof” y, |y|≤p(|x|), V(x,y) rejects (soundness)

Probabilistically Checkable Proof: PCP(r(n),q(n))

• Verifier V: input an instance x and a proof y (a string of bits); randomized polynomial time; outputs Yes or No

• x∈L ⇒ ∃ “proof” y such that V(x,y) accepts with probability 1, i.e., for all random strings z (completeness)

• x∉L ⇒ ∀ “proof” y, V(x,y) accepts with probability ≤ ½ (soundness)
(Can replace ½ by any constant ε>0.)

• Verifier generates a random string z of O(r(n)) random bits

• then queries O(q(n)) bits in y – not the whole proof!
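
Written out symbolically (a restatement of the two PCP conditions above; the notation V(x,y,z) for the verifier’s decision on random string z is ours, not the slides’):

```latex
% Restatement of the PCP(r(n),q(n)) conditions. V(x,y,z) denotes the
% verifier's decision on instance x with proof y and random string z
% of O(r(n)) bits, after reading O(q(n)) bits of y.
\begin{align*}
x \in L    &\;\Rightarrow\; \exists y:\ \Pr_{z}\left[ V(x,y,z) \text{ accepts} \right] = 1      && \text{(completeness)} \\
x \notin L &\;\Rightarrow\; \forall y:\ \Pr_{z}\left[ V(x,y,z) \text{ accepts} \right] \le \tfrac{1}{2} && \text{(soundness)}
\end{align*}
```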

PCP Theorem: NP = PCP(logn,1)

• Every L in NP has proofs that can be checked
probabilistically, where the verifier reads only a constant
number of bits in the proof and decides correctly with
high probability

• Original proof in Arora, Lund, Motwani, Sudan, Szegedy:
Journal of ACM 1998

• Simpler proof in Dinur, Journal of ACM 2007
• For a nice overview see Sudan, Comm. of ACM 2009

PCP Theorem: NP = PCP(logn,1)

• PCP(logn,1) ⊆ NP (easy direction)

Proof: Suppose L ∈ PCP(logn,1).
Construct traditional verifier V’(x,y):
Enumerate all strings z of the appropriate length O(logn) –
there are polynomially many –
Simulate V on each of them and accept if V accepts for all.
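
A minimal sketch of this simulation, assuming a hypothetical function pcp_verifier(x, y, z) that implements V; since |z| = O(logn) there are only polynomially many random strings, so the enumeration runs in polynomial time.

```python
from itertools import product

# Minimal sketch (pcp_verifier is an assumed interface, not from the lecture):
# the traditional verifier V' simulates the PCP verifier V on every random
# string z and accepts only if all of them accept.

def traditional_verifier(x, y, pcp_verifier, num_random_bits):
    """Accept iff the PCP verifier accepts for every random string z."""
    for bits in product([0, 1], repeat=num_random_bits):  # 2^{O(log n)} = poly(n) strings
        if not pcp_verifier(x, y, bits):                   # V(x, y, z)
            return False   # some z rejects: for x not in L this happens for every y
    return True            # all z accept: possible only when x is in L (completeness)
```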

Robust 3SAT ⇒ PCP Theorem
• NP ⊆ PCP(logn,1)
Proof: Suppose L ∈ NP.
• From input string x, construct a 3SAT formula φ_x such that
• x∈L ⇒ there is an assignment that satisfies all the

clauses of φ_x
• x∉L ⇒ every assignment satisfies at most a fraction t of

the clauses

• proof y for x∈L = satisfying assignment for φ_x

Robust 3SAT ⇒ PCP Theorem ctd.
• PCP verifier V:
• Construct φ_x
• Pick at random a clause (i.e. use random string z as index

to a clause), query the bits for the variables in the clause,
and accept if clause is satisfied.

• x∈L with y a satisfying assignment for φ_x ⇒ Pr(accept) = 1
• x∉L with y any assignment ⇒ Pr(accept) ≤ t

– Can reduce the latter to ≤ ½ by repeating a sufficient number of times
(a constant number: ⌈-1/log t⌉ repetitions suffice, with log base 2)

• randomness: r(n) = O(log(#clauses)) = O(logn)
• query bits: q(n) = 3⌈-1/log t⌉ = O(1)
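
A minimal sketch of this verifier (not the lecture’s code): the clause list of φ_x and the bit-oracle query(i) into the proof y are assumed interfaces.

```python
import math
import random

# Minimal sketch: PCP verifier for L via the robust 3SAT reduction.
# `clauses` is the clause list of phi_x (lists of +/- ints, one proof bit
# per variable) and `query(i)` returns bit i of the proof y; both are
# assumptions made for this sketch.

def pcp_verifier(clauses, query, t):
    """Sample k random clauses; accept iff every sampled clause is satisfied."""
    k = math.ceil(-1 / math.log2(t))           # repetitions: drives t^k <= 1/2
    for _ in range(k):
        z = random.randrange(len(clauses))      # O(log #clauses) = O(log n) random bits
        clause = clauses[z]
        # read only the <= 3 proof bits for the variables of this clause
        if not any((lit > 0) == bool(query(abs(lit))) for lit in clause):
            return False                        # sampled clause violated: reject
    return True                                 # all sampled clauses satisfied: accept
```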

PCP Theorem ⇒ Robust 3SAT
• Assuming the PCP Theorem we can derive a robust

reduction to 3SAT from every problem in NP:
• Suppose L ∈ NP = PCP(logn,1) with PCP verifier V
• On input x for L, we will construct a 3CNF formula φ.
• Enumerate the random strings z.

– For each one of them, V reads a set Sz of q bits of y and
accordingly accepts or rejects ⇒

– a Boolean function fz({ yi | i ∈ Sz }) of q = O(1) variables.
Transform to 3CNF ⇒ set Cz of at most 2^q clauses
in the variables { yi | i ∈ Sz } and auxiliary variables

• Formula φ = union of the clauses Cz over all z
• #clauses ≤ 2^{r(n)} · 2^q = 2^{O(logn)} · O(1) = polynomial
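
A minimal sketch of this construction (not from the lecture; queried_positions and decide are hypothetical interfaces to the verifier V). For each random string z it tabulates the Boolean function fz over the q queried bits and adds one clause ruling out each rejecting local assignment, giving at most 2^q clauses per Cz; the further conversion of these q-literal clauses to 3CNF with auxiliary variables is omitted.

```python
from itertools import product

# Minimal sketch (hypothetical interfaces, not from the lecture):
#   queried_positions(x, z) -> list of the q = O(1) proof positions S_z
#   decide(x, z, bits)      -> V's verdict given a {position: bit} answer map
# One clause is added per rejecting local assignment, so each C_z has at
# most 2^q clauses.

def build_formula(x, queried_positions, decide, num_random_bits):
    clauses = []                                   # literal +i / -i stands for y_i / not y_i
    for z in product([0, 1], repeat=num_random_bits):
        S_z = queried_positions(x, z)
        for answers in product([0, 1], repeat=len(S_z)):
            if not decide(x, z, dict(zip(S_z, answers))):
                # clause that is false exactly on this rejecting assignment
                # (positions are 0-based, so shift by 1 to avoid literal 0)
                clauses.append([-(i + 1) if a == 1 else (i + 1)
                                for i, a in zip(S_z, answers)])
    return clauses
```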

PCP Theorem ⇒ Robust 3SAT ctd.

• x∈L ⇒ φ is satisfiable

• x∉L ⇒ ∀y: y violates at least one clause from at least ½ of the sets Cz

⇒ y violates at least 2^{r(n)-1} clauses out of a total of at most 2^{r(n)} · 2^q clauses

⇒ y violates at least a constant fraction of the clauses,

i.e., y satisfies at most a fraction t = 1 - 2^{-(q+1)} of the clauses

Some (Nontrivial) Consequences
for Approximability of Problems

• Node Cover: no PTAS
• Metric TSP: no PTAS
• Max Independent Set: no constant ratio
• Max Clique: no constant ratio
• Graph Coloring: no constant ratio
• Set Cover: no constant ratio