
Limits of Computation
21 – How to ease the pain… (of NP-completeness) Bernhard Reus
1
The story so far
• we don’t know whether P = NP
• we have seen that there are particularly hard problems in NP, the NP-complete problems
• we don’t know any polynomial time solvers for NP-complete problems
• so are we in serious trouble?
2

Getting around infeasibility?
THIS TIME
• how to attack NP-complete problems
(various techniques)
• sometimes an NP-complete problem even becomes an asset
Progress in TSP solutions
www.math.uwaterloo.ca/tsp/methods/progress/progress.htm
3
Trying to ease the pain
• clever exact algorithms (heuristic search) that work well only for small(er) sizes
• approximation algorithms (just not the optimum)
• parallelism
• randomisation (use probabilities)
• harness natural phenomena (the future?):
• molecular (DNA) computing
• quantum computing
4

Exact but for small(er) instances
• Use heuristic search (branch and bound, branch and cut, see AI); a tiny branch-and-bound sketch follows below.
• This will work well, but only up to a certain size of input.
• This is ok if that’s all you need.
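To make the branch-and-bound idea concrete, here is a minimal sketch for exact TSP. It is not code from the lecture; the function name and example matrix are made up, and it only prunes with the length of the best tour found so far (real solvers use much sharper bounds and cutting planes).

```python
# Minimal branch-and-bound sketch for exact TSP (illustrative only).
# Assumes `dist` is a symmetric matrix of non-negative distances.

def tsp_branch_and_bound(dist):
    n = len(dist)
    best_len = float("inf")
    best_tour = None

    def extend(tour, length, remaining):
        nonlocal best_len, best_tour
        if length >= best_len:              # bound: this branch cannot improve
            return
        if not remaining:                   # complete tour: close the cycle
            total = length + dist[tour[-1]][tour[0]]
            if total < best_len:
                best_len, best_tour = total, tour + [tour[0]]
            return
        for city in list(remaining):        # branch: try each unvisited city next
            extend(tour + [city],
                   length + dist[tour[-1]][city],
                   remaining - {city})

    extend([0], 0, set(range(1, n)))
    return best_tour, best_len

if __name__ == "__main__":
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 8],
            [10, 4, 8, 0]]
    print(tsp_branch_and_bound(dist))       # ([0, 1, 3, 2, 0], 23)
```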
5
Approximation
we focus on this (mainly in the context of TSP)
6

Approximation guaranteed
• for the optimisation(!) version, i.e. the function problem version, of NP-complete problems:
• instead of, e.g., computing the shortest TSP route or a colouring with the fewest colours, produce a solution in polynomial time that is good (works for you) but is not optimal.
• Some optimisation versions of NP-complete problems have good approximation algorithms, yet others do not. There is no uniformity!
• Unfortunately, finding a solution of an NP-complete problem with a guaranteed approximation factor is very often still NP-complete.
7
Approximation Algorithm

Definition 21.1 (α-approximation algorithm). An algorithm A for a maximisation problem is called α-approximation algorithm if for any instance x of the problem we have that

A(x) ≤ OPT(x) ≤ α × A(x)

where OPT(x) is the optimal value for instance x of the problem. Analogously, an algorithm A for a minimisation problem is called α-approximation algorithm if for any instance x of the problem we have that

OPT(x) ≤ A(x) ≤ α × OPT(x)

We call α the approximation factor. In either case it holds that α ≥ 1.

8
There are two kinds of factors: either one is satisfied with a constant such factor, or one demands something stronger, namely that the optimal solution can be approximated with a ratio that is arbitrarily close to 1. Note that if α is 1 then the problem can be solved exactly. For instance, if the solution shall be no more than 50% worse than the optimal minimal solution, then one has a constant factor of 1.5. Approximations only make sense for the function problem version of the NP-complete problems we have seen and not the decision version.

Approximation Classes

There is actually a complexity class for problems that can be approximated in this way:

Definition 21.2. The complexity class APX (short for “approximable”) is the set of optimization problems in NP that allow polynomial-time approximation algorithms where the approximation factor is bounded by a constant.

(NPO: optimisation, not decision problems now!)

If one has an algorithm that can approximate arbitrarily close to the optimum and it runs in polynomial time, the problem in question has a polynomial time approximation scheme:

Definition 21.3 (PTAS). The problem class PTAS is defined as the class of function optimisation problems that admit a polynomial time approximation scheme: for any ε > 0, there is a polynomial-time algorithm that is guaranteed to find a solution which is at most ε times worse than the optimum solution. In other words, the approximation factor is 1 + ε.

Clearly, the definition of PTAS is stronger than that of APX, as we can approximate arbitrarily close, so we have that PTAS ⊆ APX (just fix one ε of choice).

Sometimes, changing the problem slightly does help, as we will see below for the TSP. Consider a modified TSP, called MetricTSP. This problem is like TSP but with an extra assumption about the distances between cities (weights of edges of the graph): the restriction is that the so-called triangle inequality holds for distances. We discuss this in more detail in Sect. 21.5.2.

Even more amazing is the connection of the PCP Theorem to the limitations of approximation algorithms. The PCP Theorem concerns probabilistically checkable proofs: it makes no claim about the runtime of the probabilistic verifier other than that it is polynomial, so that runtime may actually be worse than that of a classic verifier; the surprise is rather how little of a certificate such a verifier actually needs to check, which is not in line with our intuition (a certificate showing that a large graph has a TSP tour of length k or less would seem to require reading all of it). Probabilistically checkable proofs therefore also have applications in cryptography. While for some optimization problems approximation algorithms with small approximation factors were discovered (mostly in the 1970s), for others intractability results were quickly obtained, like e.g. for TSP (see Thm. 21.3). Those inapproximability results were usually up to P = NP (or some other very unlikely assumption). But there remained a third group of problems for which the situation was not so obvious. In [19] the connection between approximations and probabilistic provers in the context of the Clique Problem (see Exercise 12 in Chap. 20) was noticed, “showing how any sufficiently good approximation algorithm for clique could be used to test whether probabilistically checkable proofs exist, and hence to determine membership in NP-complete languages.” [27] From the results of [5] one could conclude that the Clique problem does not admit any approximation and is therefore not in APX. Thanks to [6] this could be generalised to a wide variety of problems. The details of this connection are beyond an introductory text and can be found in [19] or summarised in e.g. [27].

9
Scope of Approximation

For optimisation problems we have the following possibilities. For many NP-complete problems, either

1. for every α > 1 there is a polynomial time approximation algorithm with approximation factor α or better (so the problem is in PTAS), or

2. there is a certain constant α > 1, the approximation threshold, such that no algorithm can produce a solution with approximation factor α or better in polynomial time (unless P = NP), so the problem is in APX, or

3. there is no α > 1 that fulfills (2), so there is no approximation algorithm at all (unless P = NP) and thus the problem is not even in APX.

The 0-1 Knapsack problem belongs to the first category, MetricTSP (and Graph Colouring) to the second category, and TSP to the third category. There are a number of problems that are known to be in APX but it is unclear whether they belong to (1): for many problems in APX it is unknown (up to P = NP) whether they are in PTAS.

It is, of course, much easier to compute with algorithms that run in polynomial time and produce approximative solutions without any quality bound (so they might produce some really poor results for certain inputs), or that only run in polynomial time for certain inputs; heuristic search can be applied to reduce the (exponentially large) …
10

Metric TSP
• The Metric version of TSP, where distances between cities satisfy the triangle inequality.
• This does have a polynomial-time algorithm that computes approximate solutions of at most 1.5 times the optimal length (Christofides’ algorithm); a simpler 2-approximation is sketched below.

[Diagram: triangle with vertices a, b, c illustrating the triangle inequality d(a,c) ≤ d(a,b) + d(b,c)]
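Christofides’ algorithm needs minimum-weight perfect matchings, so as a simpler illustration here is a sketch of the classical MST-based 2-approximation for MetricTSP. The function is hypothetical (not from the lecture) and assumes the distance matrix satisfies the triangle inequality.

```python
# Sketch of the simpler MST-based 2-approximation for MetricTSP
# (not Christofides' 1.5-approximation). Assumes `dist` satisfies
# the triangle inequality.

def mst_2_approx_tour(dist):
    n = len(dist)
    # Prim's algorithm: grow a minimum spanning tree from city 0.
    in_tree = {0}
    adj = {v: [] for v in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    # Preorder walk of the MST; skipping already-visited cities ("short-cutting")
    # cannot increase the length because of the triangle inequality.
    tour, seen, stack = [], set(), [0]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            tour.append(v)
            stack.extend(reversed(adj[v]))
    return tour + [0]

if __name__ == "__main__":
    pts = [0, 1, 3, 6]                                  # four cities on a line
    dist = [[abs(a - b) for b in pts] for a in pts]     # |i - j| is a metric
    tour = mst_2_approx_tour(dist)
    print(tour, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
```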
11
Other coping strategies
12

Parallelism
• To speed up the addition of N numbers from linear to logarithmic time you need N/2 processors: add N/2 pairs in parallel, then N/4 pairs, and so on until one pair remains, so only log2 N additions are needed instead of N − 1 (see the sketch below).
• You can use parallel processors to simulate the “nondeterminism” of NTMs for NP.
Example: 8 numbers, 4 processors, 3 time units instead of 7.
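A small sketch (not from the lecture) that simulates this pairwise addition scheme and counts the parallel rounds; it only illustrates the log2 N structure, not real parallel speed-up.

```python
def parallel_sum_rounds(values):
    # Each round adds all adjacent pairs "simultaneously" (using len(values)//2
    # processors), so the number of rounds is about log2 N instead of N - 1 steps.
    rounds = 0
    while len(values) > 1:
        values = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)] \
                 + ([values[-1]] if len(values) % 2 else [])
        rounds += 1
    return values[0], rounds

print(parallel_sum_rounds(list(range(1, 9))))   # (36, 3): 8 numbers, 3 rounds instead of 7 additions
```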
13
Parallelism (cont’d)
• Can NP-complete problems be solved in polynomial time on parallel processors?
• only in principle, but not in reality as
• (1) exponentially many processors needed
• (2) letting them communicate may/will again need more than polynomial time!
• Only a constant speed-up is guaranteed by a constant number of processors in use.
• Still good to get a speed-up: see multi-core processors.
14

Randomisation: Las Vegas vs Monte Carlo
• two approaches
• Las Vegas: 

always correct but only probably fast

• Monte Carlo: 

always fast, only probably correct;
15
RP: “yes-biased” Monte Carlo randomised polynomial-time
• returns “yes” only if “yes” is the correct answer;
• the error of rejecting a correct “yes” answer has some (low) probability p < 1.
• By (independently) repeating the run n times, the probability of rejecting a correct “yes” answer becomes p^n, i.e. as small as you like (see the sketch below).
• Define RP as the class of problems decided by such a yes-biased Monte Carlo algorithm in polynomial time.
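A minimal sketch of this amplification step, assuming a hypothetical yes-biased test `noisy_yes_test` that never answers “yes” wrongly but may miss a “yes” instance with probability p:

```python
import random

def amplified(noisy_yes_test, instance, n=30):
    # Accept as soon as any of n independent runs says "yes": a "yes" is always
    # trustworthy, and the chance that all n runs miss a yes-instance is p**n.
    return any(noisy_yes_test(instance) for _ in range(n))

def toy_test(is_yes_instance):           # misses a yes-instance with p = 0.4
    return is_yes_instance and random.random() > 0.4

print(amplified(toy_test, True), amplified(toy_test, False))   # True False
```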
16

Class RP
• P ⊆ RP ⊆ NP (Why?)
• We don’t know whether P = RP or RP = NP (further open problems).
Bounded-error probabilistic polynomial-time
• One can also define class BPP of problems decided by Monte Carlo algorithms that run in polynomial time and can give incorrect answers on both “Yes” and “No” instances with small probabilities.
• BPP ⊆ NP? and P=BPP? are also open problems!
17
Problems in RP
• Primality test (is a number n a prime number?) can be solved in polynomial time (in the number of digits of n) using Monte Carlo (a sketch follows below):
[1975, Michael Oser Rabin (Turing Award winner 1976) and Gary L. Miller]
M.O. Rabin
• but the Factorisation problem (what are the prime factors of a number?) is not solved yet in polynomial time, even with randomisation.
(Factorisation is a function problem, but there is also an equivalent decision problem, up to polynomial time.)
• a non-randomised, deterministic P-algorithm for the Primality test was found in 2002 by Agrawal, Kayal, Saxena (Kanpur).
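For illustration, a minimal sketch of the Miller-Rabin test in the spirit of this slide; the number of rounds is an arbitrary choice, and production code would use a vetted library instead.

```python
import random

def is_probably_prime(n, rounds=20):
    # One-sided error: a "composite" verdict is always correct; a "prime"
    # verdict is wrong only with probability at most 4**(-rounds).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:           # write n - 1 = 2^r * d with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # witness found: n is definitely composite
    return True                 # probably prime

print(is_probably_prime(2**61 - 1))   # True  (a Mersenne prime)
print(is_probably_prime(2**67 - 1))   # False (composite, famously factored by Cole)
```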
18

Randomisation
• Monte Carlo technique for optimisation problems:
• Simulated Annealing (developed by Scott Kirkpatrick, Gelatt & Vecchi in 1983)
• a random walk between solutions: accept a new solution with a certain probability, better solutions being more likely but others possible; the acceptance probability decreases during the “cooling down” period of the algorithm (see the sketch below).
Scott Kirkpatrick
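A minimal simulated-annealing sketch for TSP along the lines described above; the move type, temperature and cooling rate are illustrative assumptions, not values from the lecture.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, steps=50_000, start_temp=10.0, cooling=0.9995):
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    temp = start_temp
    for _ in range(steps):
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
        cand_len = tour_length(cand, dist)
        # always accept improvements; accept worse tours with a probability
        # that shrinks as the temperature "cools down"
        if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling
    return best, best_len

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(30)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
    print(simulated_annealing(dist)[1])
```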
19
Return to Example: TSP
• There is an entire “industry” regarding the efficient solution of TSP.
• TSP is very common and easy to understand.
• TSP became a “benchmark” for any new discrete optimisation algorithm.
20


Example: TSP
exact solutions:
• branch and bound: about 40-60 cities
• linear programming: ~86,000 locations
In April 2006 an instance with 85,900 points on a VLSI chip was solved using the Concorde TSP Solver, taking over 136 CPU-years, see Applegate et al. (2006): simplex algorithm with cutting planes [the Dantzig-Fulkerson-Johnson method dates back to 1954].
heuristic/approximative:
• NearestNeighbour and many others: up to millions of cities, but a good solution only for (probable) cases, no guarantees (a sketch follows below)
• randomised: 700-800 cities
Freely available C libraries: www.math.uwaterloo.ca/tsp
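A minimal sketch of the NearestNeighbour heuristic mentioned above; `dist` is assumed to be a full distance matrix, and there is no quality guarantee.

```python
def nearest_neighbour_tour(dist, start=0):
    # Greedy heuristic: always travel to the closest city not yet visited,
    # then return to the start. Runs in O(n^2) time, no approximation guarantee.
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]
```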
21
How “bad” complexity of a problem can become an asset!
22

When bad complexity is good
Public key cryptography (“RSA” algorithm):
• two primes p and q
Rivest, Shamir, Adleman, MIT 1977; Turing Award winners 2002
• publish the public key computed from their product pq …
• … but not private data like p, q, (p-1)(q-1) that are all used to cipher and decipher

(details in your favourite cryptography lecture/website; a toy numeric sketch follows below)
only secure if factorisation is infeasible!
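A toy numeric sketch of the RSA idea with tiny (completely insecure) primes, only to show which quantities are public and which must stay private; it is not the lecture's example, and real RSA also needs padding and much larger primes.

```python
# Toy RSA with tiny, insecure numbers (needs Python 3.8+ for pow(e, -1, phi)).

p, q = 61, 53                   # private: the two primes
n = p * q                       # public modulus: 3233
phi = (p - 1) * (q - 1)         # private: 3120, needed to derive the private key
e = 17                          # public exponent (coprime to phi)
d = pow(e, -1, phi)             # private exponent: 2753

def encrypt(m):                 # anyone can do this, knowing only (n, e)
    return pow(m, e, n)

def decrypt(c):                 # only the owner of d can do this efficiently
    return pow(c, d, n)

msg = 65
print(encrypt(msg), decrypt(encrypt(msg)))   # 2790 65
```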
23
Factorisation problem
• We know that the Factorisation problem (in its decision problem version) is in NP.
• We do not know whether it is NP-complete, nor whether it is in P or in RP.
• But for the security of RSA everybody implicitly relies on the fact that it is a hard problem.
• If NP = P then all of modern encryption (technology) could potentially collapse, if the polynomial algorithm is indeed found and its runtime is not too bad.
24

END
© 2008-21. Bernhard Reus, University of Sussex
Next Time: DNA and Quantum computing
25