
Limits of Computation
14 – Robustness of P Bernhard Reus
1
The story so far
• We have discussed “computability”.
• Church–Turing Thesis: it does not matter which notion of computation we use.
• In complexity: does this still hold if we restrict attention to the class P?
2

THIS TIME
Robustness of P
• We discuss theses similar to the Church–Turing Thesis, but now with time complexity added.
• “Robust” means resilient, hard-wearing; for a complexity class this means resilient under compilation into other languages.
• We focus on the class P, as it turns out to be “robust” compared to other classes (which are not as robust).
3
Cook’s (Invariance) Thesis

Definition 14.2 (Invariance Thesis) All “reasonable” sequential notions of computation can simulate each other up to a polynomial factor.

This is known as Cook’s¹ Thesis in [8, p. 242], which phrases it as follows:

P^L is the same class of problems for all reasonable sequential (that is, nonparallel) computational models L.

This means that whatever (reasonable) sequential computational model we use to decide problems, the difference in running time will be a polynomial factor. Like the Church–Turing thesis, this is a (widely believed) thesis rather than a theorem, as “reasonable computational models” is not a formal notion. We can give evidence for it by showing that the various instances of P for those models we defined formally are equal. In other words, that

P^SRAM = P^TM = P^GOTO = P^WHILE = P^WH1LE = P^CA

We will give some evidence now.

It is important here that the notions of computation (models of computation) are restricted to sequential ones. This excludes true parallel computation, which needs special treatment in complexity and which we will address briefly below in Sect. 14.2.1. The cellular automata are not excluded since we stipulate that their seed is always finite.

¹ Stephen Arthur Cook, Turing Award winner 1982.

4
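One reason the Invariance Thesis is coherent at all is that polynomial overheads compose: chaining two simulations, each with polynomial overhead, still gives a polynomial overhead. A minimal sketch of this closure property (the helper names are ours, for illustration only):

```python
# Polynomial overheads compose: if simulating model B on model A costs
# p(t) steps per t steps, and simulating C on B costs q(t), then running
# C on A costs p(q(t)) -- still a polynomial. A quick check with
# coefficient-list polynomials (index i holds the coefficient of t^i):

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [c0, c1, c2, ...] at x (Horner)."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def compose_degree(p, q):
    """Degree of p(q(t)) is deg(p) * deg(q) -- still a polynomial."""
    return (len(p) - 1) * (len(q) - 1)

p = [0, 0, 3]      # p(t) = 3t^2   (quadratic simulation overhead)
q = [5, 2]         # q(t) = 2t + 5 (linear simulation overhead)

# Chained simulation cost at t = 10: p(q(10)) = 3 * 25^2 = 1875
print(poly_eval(p, poly_eval(q, 10)))   # 1875
print(compose_degree(p, q))             # 2
```

This closure under composition is exactly what fails for linear bounds, which is why LIN (below) is not robust in the same way.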

Evidence

Listed below are the main results without detailed proof. The time differences between source and target program can be derived by inspecting the code that the compilers generate. More details can be found in [8] again.

Lemma 14.1

TM ≼^lintime−pg−ind GOTO ≼^lintime−pg−ind SRAM ≼^ptime TM

Proof Compiling a Turing machine program into a GOTO program results in a GOTO program that is at most a constant (program-independent) linear factor slower than the original Turing machine program. Similarly, any GOTO program can be compiled into an SRAM program with only a (program-independent) constant slowdown. Compiling an SRAM program, however, into a Turing machine program produces a Turing machine program whose time bound differs from that of the original program by a polynomial. This means that the resulting Turing machine program may be significantly slower. This is not surprising, as the Turing machine head must move across the tape to find the right encoded registers and register values (represented in a binary encoding on the Turing machine tape). By carefully monitoring how far the head moves in the worst case in terms of the size of the input,⁴ one observes polynomial slow-down.

Lemma 14.2

TM ≼^lintime CA ≼^ptime TM

Proof It was shown by [11] (where details of the construction can be found) that a Turing machine can be simulated in linear time by a (one-dimensional) cellular automaton. A general cellular automaton with d-dimensional cell space can be simulated by a TM as the seed pattern is finite. Assume the CA in question uses T(n) steps. The question is how large the non-blank cell space becomes; initially it is, say, n^d large, where d is the dimension of the automaton. In each step the non-blank cell space can only grow by one cell in each direction of each dimension. Thus after T(n) steps the grid that is not blank covers at most the following number of cells:

∏_{k=1}^{d} (n + 2T(n)) = (n + 2T(n))^d

If we assume each update of one cell takes time c, altogether for the simulating TM we obtain a runtime of (n + 2T(n))^d × c × T(n), which describes a polynomial slowdown.

⁴ See Exercise 6.

Now, using the lifting lemmas from Lecture 13, slide 14:

Theorem 14.1 It holds that

P^CA = P^TM = P^GOTO = P^SRAM

Proof From Lemma 14.1, with the auxiliary result Lemma 13.3 about simulation relations, we can conclude that

TM ≼^ptime GOTO ≼^ptime SRAM ≼^ptime TM

and thus

TM ≡^ptime GOTO ≡^ptime SRAM ≡^ptime TM.

From Lemma 14.2, again with the auxiliary result Lemma 13.3, we can conclude that TM ≼^ptime CA and thus TM ≡^ptime CA. By Lemma 13.2 we therefore get

P^CA = P^TM = P^GOTO = P^SRAM

5
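The cell-count bound from the proof of Lemma 14.2 is easy to evaluate numerically. A small sketch (the function name is ours) showing that for a polynomial T(n) the simulation cost stays polynomial:

```python
def ca_simulation_bound(n, T, d, c=1):
    """Upper bound on TM steps to simulate a d-dimensional CA that runs
    for T steps on a seed of side length n: the non-blank region grows
    by at most one cell per direction per step, so after T steps it has
    at most (n + 2T)^d cells, each costing c TM steps per CA step."""
    return (n + 2 * T) ** d * c * T

# For T(n) = n^2 and d = 2 the cost is (n + 2n^2)^2 * n^2, a degree-6
# polynomial in n. At n = 10 (so T = 100):
print(ca_simulation_bound(10, 10 ** 2, 2))   # (10 + 200)^2 * 100 = 4410000
```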
14.2.3 Linear Time

In Chap. 11 we have seen that we can compile between GOTO, WHILE, and WH1LE. Let us now investigate the effect of those compilations on run time.

Lemma 14.3

GOTO ≼^lintime−pg−ind WHILE ≼^lintime WH1LE
WH1LE ≼^lintime−pg−ind WHILE ≼^lintime GOTO

Proof WH1LE ≼^lintime−pg−ind WHILE holds trivially (with linear factor 1) as each WH1LE program is already a WHILE program. WHILE ≼^lintime WH1LE is discussed in Exercise 3, and GOTO ≼^lintime−pg−ind WHILE in Exercise 4. To see that WHILE ≼^lintime GOTO we need to recall from Sect. 11.8.2 how the compilation works. Since GOTO is quite similar to WHILE, there are only two issues to address:

• Compile away the while-loops: while E {S} is replaced by a conditional jump, plus an extra unconditional jump at the end of the loop body S back to the instruction with the conditional jump. Obviously the extra jump adds only constant time per while-loop.
• Compile away complex expressions, “splitting” them into GOTO assignments that can add one operator on each expression only. A complex expression E with |E| = n uses at most n − 1 operators hd, tl or cons (see Exercise 5). To compile expression E into a GOTO program it needs to be built up using additional assignments X:=hd X, X:=tl X, or X:=cons X Y, which each cost 3 or 4 time units.

Linear time is only robust for “similar” languages:

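The expression-splitting step above can be sketched in a few lines. This is a toy version only: the representation of expressions and the temporary-variable names are made up for illustration, while the real compilation is the one of Sect. 11.8.2 / [8].

```python
# Flatten a nested expression tree (built from hd, tl, cons and
# variables) into a list of single-operator GOTO-style assignments.
# An expression with n operators yields exactly n assignments, each
# of constant cost -- which is why the compilation preserves lintime.

counter = 0

def fresh():
    """Generate a fresh temporary variable name T1, T2, ..."""
    global counter
    counter += 1
    return f"T{counter}"

def split(expr, code):
    """Return a variable holding the value of expr, appending
    one-operator assignments to code."""
    if isinstance(expr, str):          # a plain variable: nothing to do
        return expr
    op, *args = expr                   # ("hd", e) / ("tl", e) / ("cons", e1, e2)
    arg_vars = [split(a, code) for a in args]
    t = fresh()
    code.append(f"{t} := {op} {' '.join(arg_vars)}")
    return t

code = []
# compile  cons (hd X) (tl Y)  -- three operators, three assignments
result = split(("cons", ("hd", "X"), ("tl", "Y")), code)
for line in code:
    print(line)
# T1 := hd X
# T2 := tl Y
# T3 := cons T1 T2
```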
Theorem 14.2

LIN^GOTO = LIN^WHILE = LIN^WH1LE

This proves that LIN is not as robust as P, or one might argue it is not robust at all, since when compiling into Turing machines, for instance, one might lose linear time bounds. This also is evidence for the special role that P has in complexity theory.

6
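The contrast in Theorem 14.2 can be made concrete: pushing a linear time bound through a simulation with quadratic overhead (the kind an SRAM-to-TM compilation can incur) destroys linearity but not polynomiality. A toy sketch, with made-up bounds:

```python
def simulate_overhead(t):
    """Hypothetical quadratic simulation overhead: t steps -> t^2 steps."""
    return t ** 2

def linear_bound(n):
    """A LIN-style time bound, here 5n."""
    return 5 * n

# After simulation the bound is (5n)^2 = 25n^2: no longer linear
# (so a LIN bound is lost) but still polynomial (so membership
# in P is preserved).
print([simulate_overhead(linear_bound(n)) for n in (1, 10, 100)])
# [25, 2500, 250000]
```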
Is P the bee’s knees then?

14.3 Cobham–Edmonds Thesis

The Cobham–Edmonds thesis is also known as the Cook–Karp thesis in various textbooks, see e.g. [2, Chap. 1.2]:

Definition 14.3 (Cobham–Edmonds Thesis) The tractable (feasible) problems are exactly those in P.

• also often called the Cook–Karp thesis
• only a thesis (what is a “tractable/feasible problem”?)
• not widely believed. Why not? (next slide)

Again, this is only a thesis, because “tractable problems” is not a formally defined class. The statement is quite a strong one, for which there is much less evidence than for Cook’s thesis. It is true that polynomial functions have a much more benign rate of growth than exponential functions and as such are more welcome as time bounds. But two questions arise:

1. Is every polynomial time bound really a good time bound indicating feasibility?
2. Is every time bound beyond polynomial really a bad time bound indicating intractability?

7

Cobham–Edmonds/Cook–Karp

• Is every polynomial time bound really a good time bound indicating feasibility?
• Is every time bound beyond polynomial really a bad time bound indicating intractability?

Well, one might answer both questions with “no”. Regarding (1), consider polynomials with a very high degree, e.g. f(n) = n^20. Is a problem with this time bound really tractable? If we consider Table 13.1 from Sect. 13.1 and add f(n) = n^20, then for n = 10 we obtain a value of 31.68 centuries! Surely this is not feasible, and taking a larger exponent like n^100 makes it even less so. So in practice, “polynomial” is not automatically a synonym for “good”. On the other hand, one could argue that argument (1) is not relevant because one does not know any algorithms that have a polynomial time bound where the degree is higher than 5 or 6.

Regarding (2), there are algorithms with exponential worst-case complexity, like the simplex algorithm.⁶ It was discovered in 1947 by George Dantzig⁵ [4] for linear programming, and on average it performs much better than polynomial algorithms. On the other hand, one could argue that argument (2) is not relevant because we …

⁵ George Bernard Dantzig (November 8, 1914 – May 13, 2005), “the father of linear programming”, was an American mathematician who made important contributions in particular to operations research and statistics. In statistics, Dantzig solved two open problems, which he had mistaken for homework after arriving late to a lecture. He told this story himself in [3, p. 301]. Apparently, this was also the basis for the Hollywood movie “Good Will Hunting”.

⁶ The simplex algorithm solves a problem that we discuss in more detail in Sect. 16.6.

8
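The centuries figure for n^20 can be reproduced in a few lines, assuming one step per nanosecond (10^9 steps per second; the book's Table 13.1 may use slightly different constants, hence the small discrepancy with 31.68):

```python
# How long do f(n) = n^20 steps take at n = 10, assuming 10^9 steps
# per second (one step per nanosecond)?
steps = 10 ** 20
seconds = steps / 10 ** 9               # 10^11 seconds
years = seconds / (365.25 * 24 * 3600)  # Julian years
print(f"{years / 100:.2f} centuries")   # 31.69 centuries (slide: 31.68)
```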

END
© 2008-21. Bernhard Reus, University of Sussex
Next time: Can we solve more problems given more time?
9