
Limits of
Computation
11 – Church-Turing Thesis Bernhard Reus
1

The story so far


We have found problems that cannot be solved by a WHILE program (Halting, Busy-Beaver, Tiling, Ambiguity of CFGs, etc.).
But this does not yet mean that they could not be solved by a different machine model (a different definition of "effective procedure").
2

More ‘notions of computation’

• Register machines
• Counter machines
• Turing machines
• Goto language
• Cellular Automata

Evidence for the Church-Turing Thesis is obtained by looking at other notions of computation and showing (or stating) that they are all equivalent: all these ‘models of computation’ are equivalent w.r.t. the problems that are computable in them.
3
THIS TIME

Church-Turing Thesis
All reasonable computation models, i.e. all reasonable formalisations of the intuitive notion of effective computability, are equivalent (no restrictions on memory size or execution time are assumed).
So it does not matter what model we use: it was alright to use WHILE.
We cannot prove this, as it refers to informal entities ("reasonable computation models"); it is thus only a "thesis". But we provide some evidence for it.
4

Models of computation
• Random Access Machines (register machines): RAM
• Turing machines: TM
• Counter machines: CM
• Flowchart language: GOTO
• Cellular Automata: CA
• While language: WHILE ✔
• Church's λ-Calculus (see course Comparative Prog. Languages)
5

Machine instructions
TM, GOTO, CM, RAM have the following in common:
• A program is a sequence of instructions with labels: l1:I1; l2:I2; … ln:In
• A state or configuration during execution of the program is of the form (l,σ), where l is the current instruction's label and σ is the "store", which varies from machine to machine.
WHILE does not have this 'machine flavour'; it is more abstract. CA are quite different again.
6

Semantic Framework for Machines
A machine language only allows jumps, and those are modelled by considering the current instruction label; consequently, we can describe the semantics by a "one-step" operational semantics of the instructions.
Assuming we know the syntax of the machine language, all we need to define the interpretation of programs, i.e. the semantics of the machine language, are the following four ingredients:
• a definition of what a store is for the semantics of the machine: the memory that is supposed to store data, which will typically contain the values of the variables of the program;
• a function Readin that takes the input and produces the initial store;
• a function Readout that takes the final store and reads out the result;
• a description of the semantics of the instructions, written as p ⊢ s → s′, which means that the machine program p transits from configuration s into configuration s′ in one step (a state change from s to s′). Note that this transition can be described without defining a more complex relation, as was needed for the semantics of commands in WHILE.
7

Semantic Framework for Machines
Based on the third item, the "one-step" operational semantics p ⊢ s → s′, one defines the "many-step" semantics p ⊢ s →* s′, which shall denote that machine program p transits from state s into state s′ in zero, one or many steps of the "one-step" semantics. A state is a tuple consisting of the current instruction label to be executed next and the store (or memory). So a machine state is e.g. (3, σ), where σ is a concrete store for the type of machine in question.
If we have those ingredients we can now define the semantics of a machine program p as follows:
Definition 11.2 (General framework for machine model semantics). Let p be a machine program with m instructions. Then
⟦p⟧(d) = e  iff  s0 = Readin(d) and p ⊢ (1, s0) →* (m+1, s) and e = Readout(s)
In other words, Readin describes the initial store s0 and thus (1, s0), the initial state of the machine, as execution starts with machine instruction 1. We have reached the final state if the next instruction is beyond the last instruction of the program. As m is the last instruction of our program p, we know that if we are at instruction m+1 the program has terminated. One then reads out the result from the final store s using Readout.
We will now show how the semantics of other languages/machines are instances of this framework. In the following sections we instantiate the framework by describing, in each case, the syntax and then the four ingredients for the semantic framework.
8
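As an illustration of Definition 11.2 (not part of the lecture material), the following Python sketch captures the general framework: a concrete machine model only has to supply Readin, Readout and a one-step function, and a generic driver runs the program from state (1, Readin(d)) until it reaches label m+1. All names (run_machine, step, readin, readout) are my own, and the step bound is a pragmatic guard since machine programs may diverge.

```python
from typing import Any, Callable, Optional, Tuple

Store = Any
State = Tuple[int, Store]          # (current instruction label, store)

def run_machine(program: list,
                readin: Callable[[Any], Store],
                step: Callable[[list, State], Optional[State]],
                readout: Callable[[Store], Any],
                data: Any,
                max_steps: int = 100_000) -> Any:
    """General framework of Def. 11.2: [[p]](d) = Readout(s)
    provided p |- (1, Readin(d)) ->* (m+1, s)."""
    m = len(program)
    state: State = (1, readin(data))          # the initial state (1, s0)
    for _ in range(max_steps):
        label, store = state
        if label == m + 1:                    # final state reached
            return readout(store)
        nxt = step(program, state)            # one-step semantics p |- s -> s'
        if nxt is None:                       # no applicable rule: machine is stuck
            raise RuntimeError(f"stuck at label {label}")
        state = nxt
    raise RuntimeError("step bound exceeded (program may not terminate)")
```

The instantiations for TM, GOTO, CM and (S)RAM below then only differ in their store type, Readin/Readout and one-step function.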

Turing Machine
[Figure: a Turing machine with, in this example, 1 tape; a read/write head that can move forward and backward along the tape; a program 1: …; 2: …; 3: … with a current instruction counter (here IC: 4); tape symbols come from a given finite alphabet with a special symbol B (blank).]
9

11.3 Turing Machines TM
Syntax Instructions are described in Table 11.1, presented in a way to fit our machine model. The classical definition combines several of these operations in one "rule"; here such rules are split into subsequent instructions, and each symbol-dependent configuration change will still be represented by a single particular jump instruction. Our presentation follows [15, Chap. 3] as this will simplify the time measurement later in the complexity part. We instantiate the framework from Def. 11.2 for Turing machines.

instruction                explanation
right_j                    move right (on tape j)
left_j                     move left (on tape j)
write_j S                  write S (on tape j)
if_j S goto ℓ1 else ℓ2     conditional jump (read tape j)

Table 11.1: Syntax of TM. The subscript j indicates which of the k tapes the instruction is intended for.

Store (= Tapes) The stores for a Turing machine are its tapes. For Turing machines we also need to be aware of the head position on each tape, which is indicated by an underscore. So for k-tape machines the store is a k-tuple of the form
(L1 S1 R1, L2 S2 R2, …, Lk Sk Rk)
where the Li and Ri are strings of symbols of the tape alphabet and the Si are the "currently scanned" tape symbols (the position of the read/write head), belonging to the tape alphabet. The tape alphabet contains a special blank symbol B that the tape is initialised with and that is never considered part of the input or output, and a finite number of other symbols. In Turing's original definition the only other symbols were 0 and 1; allowing more (finitely many) symbols does not change much³.
³ This will be discussed in Exercise 9.
10
Turing Machine
Readin
Readin(x) = (Bx, B, B, …, B)
In other words, the input x is stored on the first tape, the head is put left of the first input character (on a blank B), and all other tapes are initially blank.
Readout
Readout(L1 S1 R1, L2 S2 R2, …, Lk Sk Rk) = Prefix(R1)
where Prefix(R1 B R2) = R1, provided that R1 does not contain any blank symbols B. In other words, the output is the word just right of the current head position on tape 1 up to, and excluding, the first blank.
Semantics Table 11.2 below gives the one-step operational semantics for Turing machines with 1 tape, so subscripts are omitted (it is pretty obvious how to generalise this to k tapes). Note that p is the program ℓ1:I1; …; ℓm:Im. The scanned symbol is written (underlined in the book) between L and R; the second and fourth line below deal with the cases where the head sits at the right or left end of the tape.

p ⊢ (ℓ, L S S′R) → (ℓ+1, LS S′ R)   if p(ℓ) = right
p ⊢ (ℓ, L S)     → (ℓ+1, LS B)      if p(ℓ) = right
p ⊢ (ℓ, LS′ S R) → (ℓ+1, L S′ SR)   if p(ℓ) = left
p ⊢ (ℓ, S R)     → (ℓ+1, B SR)      if p(ℓ) = left
p ⊢ (ℓ, L S R)   → (ℓ+1, L S′ R)    if p(ℓ) = write S′
p ⊢ (ℓ, L S R)   → (ℓ1, L S R)      if p(ℓ) = if S goto ℓ1 else ℓ2
p ⊢ (ℓ, L S′ R)  → (ℓ2, L S′ R)     if p(ℓ) = if S goto ℓ1 else ℓ2 and S ≠ S′

Table 11.2: One-step operational semantics for TM
11
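To make Table 11.2 concrete, here is a minimal Python sketch of the one-step semantics of a 1-tape Turing machine. The store is represented as a triple (L, S, R) of the symbols left of the head, the scanned symbol, and the symbols right of the head; the tuple encoding of instructions and all function names are my own assumptions, not the book's notation.

```python
B = "B"  # the blank symbol

def tm_readin(x):
    # Readin(x): input on tape 1, head on the blank left of it; store = (L, S, R).
    return ([], B, list(x))

def tm_readout(store):
    # Readout: the word right of the head up to (excluding) the first blank.
    _, _, right = store
    out = []
    for sym in right:
        if sym == B:
            break
        out.append(sym)
    return "".join(out)

def tm_step(program, state):
    """One step of Table 11.2; instruction ℓ is program[ℓ-1]."""
    label, (L, S, R) = state
    if label > len(program):
        return None                                   # already halted
    instr = program[label - 1]
    op = instr[0]
    if op == "right":                                 # extend with B at the right end
        return (label + 1, (L + [S], R[0] if R else B, R[1:] if R else []))
    if op == "left":                                  # extend with B at the left end
        return (label + 1, (L[:-1], L[-1] if L else B, [S] + R))
    if op == "write":                                 # ("write", S')
        return (label + 1, (L, instr[1], R))
    if op == "if":                                    # ("if", S', l1, l2)
        _, sym, l1, l2 = instr
        return (l1 if S == sym else l2, (L, S, R))
    raise ValueError(f"unknown instruction {instr!r}")

# Example: move onto the input symbol, overwrite it with 1, move back left.
prog = [("right",), ("write", "1"), ("left",)]
state = (1, tm_readin("0"))
while state[0] <= len(prog):                          # halt at label m+1
    state = tm_step(prog, state)
print(tm_readout(state[1]))                           # prints "1"
```

The same step function could be plugged into the generic run_machine driver sketched after Definition 11.2.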

GOTO
… instruction code) [17], the only language shipped with the first basic desktop computers in the early eighties⁴. Note also that in GOTO we cannot build complex expressions like in WHILE; so all arguments to the operators hd, tl, and cons, resp., are variables.

Syntax In the following X, Y, and Z denote program variables. Instructions are as described in Table 11.3.

instruction               explanation
X := nil                  assign nil
X := Y                    assign variable
X := hd Y                 assign hd Y
X := tl Y                 assign tl Y
X := cons Y Z             assign cons Y Z
if X goto ℓ1 else ℓ2      conditional jump

Table 11.3: Syntax of GOTO

How?
• uses the tree data type D we know from WHILE
• only permits operations on variables
• if X tests whether X is not nil and then jumps according to a label
• goto (an unconditional jump) can be expressed as well

Store The store is exactly the same as for WHILE-programs: a set of bindings, i.e. pairs of GOTO variable names and elements of D. As before, we write stores as lists of pairs, e.g. {X: d, …, Z: e}, using the same conventions as for WHILE-stores. Readin and Readout are like in the WHILE semantics.

Readin The input and output variable are both fixed as X:
Readin(d) = {X: d}
In other words, the initial store maps the input d to the variable X; all other variables of the program are initialised with nil (a consequence of how unbound variables are looked up in the store).
12

Semantics of GOTO
Table 11.4 gives the one-step operational semantics for GOTO. Here s is the store σ, s[X := d] is the store s updated so that variable X is assigned value d, and p(ℓ) is the ℓ-th instruction of p.

p ⊢ (ℓ, s) → (ℓ+1, s[X := nil])     if p(ℓ) = X := nil
p ⊢ (ℓ, s) → (ℓ+1, s[X := s(Y)])    if p(ℓ) = X := Y
p ⊢ (ℓ, s) → (ℓ+1, s[X := d])       if p(ℓ) = X := hd Y and s(Y) = ⟨d.e⟩
p ⊢ (ℓ, s) → (ℓ+1, s[X := nil])     if p(ℓ) = X := hd Y and s(Y) = nil
p ⊢ (ℓ, s) → (ℓ+1, s[X := e])       if p(ℓ) = X := tl Y and s(Y) = ⟨d.e⟩
p ⊢ (ℓ, s) → (ℓ+1, s[X := nil])     if p(ℓ) = X := tl Y and s(Y) = nil
p ⊢ (ℓ, s) → (ℓ+1, s[X := ⟨d.e⟩])   if p(ℓ) = X := cons Y Z and s(Y) = d and s(Z) = e
p ⊢ (ℓ, s) → (ℓ1, s)                if p(ℓ) = if X goto ℓ1 else ℓ2 and s(X) ≠ nil
p ⊢ (ℓ, s) → (ℓ2, s)                if p(ℓ) = if X goto ℓ1 else ℓ2 and s(X) = nil

Table 11.4: One-step operational semantics for GOTO
13

Sample Program
1: if X goto 2 else 6;
2: Y := hd X;
3: R := cons Y R;
4: X := tl X;
5: if R goto 1 else 1;
6: X := R;

Fig. 11.3: GOTO-program to reverse a list
GOTO programs are not as readable as WHILE programs. What does this one compute?

11.5 Register Machines RAM and SRAM
RAM traditionally stands for Random Access Memory, i.e. memory that can be directly addressed (one does not have to run through the storage sequentially to find the data, as e.g. on a Turing machine tape), but here we read it as Random Access Machine. It is the machine type that comes closest to actual computers.
14
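Before moving on to register machines, here is a hedged Python sketch of the GOTO instantiation (my own encoding, not part of the lecture): nil is None, cons-pairs are Python tuples, and the one-step function follows Table 11.4; it is then run on Fig. 11.3's reverse program applied to the list [a, b, c].

```python
NIL = None                      # nil, the empty tree in D
def cons(d, e):                 # binary trees in D as nested Python pairs
    return (d, e)

def goto_step(program, state):
    """One step of Table 11.4. program[l-1] is instruction l; the store is a
    dict from variable names to elements of D (unbound variables read as nil)."""
    label, store = state
    instr = program[label - 1]
    op = instr[0]
    if op == "if":                                  # ("if", X, l1, l2)
        _, X, l1, l2 = instr
        return (l1 if store.get(X) is not NIL else l2, store)
    s = dict(store)                                 # build s[X := d]
    if op == "nil":                                 # X := nil
        s[instr[1]] = NIL
    elif op == "copy":                              # X := Y
        s[instr[1]] = store.get(instr[2])
    elif op == "hd":                                # X := hd Y  (hd nil = nil)
        v = store.get(instr[2])
        s[instr[1]] = v[0] if v is not NIL else NIL
    elif op == "tl":                                # X := tl Y  (tl nil = nil)
        v = store.get(instr[2])
        s[instr[1]] = v[1] if v is not NIL else NIL
    elif op == "cons":                              # X := cons Y Z
        s[instr[1]] = cons(store.get(instr[2]), store.get(instr[3]))
    else:
        raise ValueError(f"unknown instruction {instr!r}")
    return (label + 1, s)

def run_goto(program, d):
    state = (1, {"X": d})                           # Readin(d) = {X: d}
    while state[0] <= len(program):                 # halt at label m+1
        state = goto_step(program, state)
    return state[1].get("X")                        # Readout: the value of X

# Fig. 11.3: the GOTO program that reverses a list.
reverse = [
    ("if", "X", 2, 6),          # 1: if X goto 2 else 6
    ("hd", "Y", "X"),           # 2: Y := hd X
    ("cons", "R", "Y", "R"),    # 3: R := cons Y R
    ("tl", "X", "X"),           # 4: X := tl X
    ("if", "R", 1, 1),          # 5: if R goto 1 else 1  (an unconditional jump)
    ("copy", "X", "R"),         # 6: X := R
]
abc = cons("a", cons("b", cons("c", NIL)))          # the list [a, b, c]
print(run_goto(reverse, abc))   # ('c', ('b', ('a', None))), i.e. the list [c, b, a]
```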

p ⊢ (ℓ, σ) → (ℓ+1, σ[i ↦ j+1])   if I = "Xi := Xi+1" and σ(i) = j
p ⊢ (ℓ, σ) → (ℓ+1, σ[i ↦ j−1])   if I = "Xi := Xi-̇1" and σ(i) = j ≠ 0
p ⊢ (ℓ, σ) → (ℓ+1, σ[i ↦ 0])     if I = "Xi := Xi-̇1" and σ(i) = 0
p ⊢ (ℓ, σ) → (ℓ′, σ)             if I = "if Xi=0 goto ℓ′ else ℓ″" and σ(i) = 0
p ⊢ (ℓ, σ) → (ℓ″, σ)             if I = "if Xi=0 goto ℓ′ else ℓ″" and σ(i) ≠ 0

Figure 7.4: Counter machine one-step transition rules (here I denotes the instruction at label ℓ).

RAM model
[Figure 7.5: Picture of a random access machine — a finite state control (the program) connected to registers Register 0, Register 1, Register 2, …, Register i, …, each holding a natural number (e.g. 91, 7, 13, …, 0, …).]
• uses arbitrarily many registers containing arbitrarily big numbers;
• the range of instructions allowed differs from one application to another, but nearly always includes:
  1. Copying one register into another.
  2. Indirect addressing or indexing. This allows a register whose number has been computed to be fetched from or stored into.
  3. Elementary operations on one or more registers, for example adding or subtracting 1, and comparison with zero.
15

(S)RAM instruction set
Like most machine languages there are instructions to move data from one register to another, there are conditional jumps depending on whether the value of a register is 0 or not, and there are operations like increase or decrease of a register, and addition or multiplication of registers (storing the result in a third register). The concept of indirect addressing is supported in assignments. Indirect addressing allows a program to access an unbounded number of registers beyond the fixed number of registers mentioned in the program explicitly. Moreover, those indirectly addressed registers can depend on the input of the program. Indirect addressing provides much flexibility at runtime; in particular, it allows the efficient implementation of arrays. The successor RAM model, called SRAM, is like the RAM machine but does not allow binary operations on registers. The data type is that of natural numbers.

Syntax In the following Xi denotes the i-th register. Instructions are listed in Table 11.5.

instruction                explanation
Xi := 0                    reset register value
Xi := Xi+1                 increment register value
Xi := Xi-̇1                 decrement register value
Xi := Xj                   move register value
Xi := <Xj>                 indirect addressing: move content of register addressed by Xj
<Xi> := Xj                 move into register addressed by Xi
if Xi=0 goto ℓ1 else ℓ2    conditional jump (tests whether Xi is 0 and jumps accordingly)
Xi := Xj + Xk              addition of register values (available only in RAM)
Xi := Xj * Xk              multiplication of register values (available only in RAM)

Table 11.5: Syntax of SRAM and RAM

The extra angle brackets < > in assignments indicate indirect addressing; the meaning will be clear from the semantics below. Note that Xi := Xi-̇1 denotes the decrement operation on natural numbers, which stipulates that 0-̇1 = 0. In order to highlight this, one writes the dot on top of the minus symbol. The extra operations of RAM not available in SRAM are the last two binary operations in Table 11.5.
16

Semantics of (S)RAM
RAMs seen in the literature often have larger instruction sets including addition, multiplication, or even all functions computed by finite-state automata with output, operating on their arguments' binary representations. We will argue that such extensions do not increase the class of computable functions. They can, however, affect the class of polynomial-time solvable problems, as the more powerful instructions can allow constructing extremely large values within unrealistically small time bounds.
The RAM storage has the form
SRAM-store = { σ | σ : ℕ → ℕ }
where σ(j) is the current contents of register Xj (all registers are initially 0). Further,
Readin(x) = [0 ↦ x, 1 ↦ 0, …]   (input in register X0)
Readout(σ) = σ(0)   (the result of the program is the value of register X0)
Even though one program can directly reference only a fixed set of registers, the indirect operations allow access to registers not appearing in the program text (perhaps unboundedly many). On the other hand, the store is initialised to zero except for its input register, so at any point during a computation only finitely many registers can contain nonzero values. Consequently the machine state can be represented finitely (in fact we will see that an SRAM can be simulated by a Turing machine).
Semantics We give the one-step semantics in Table 11.6. Note that s[i := n] abbreviates the store that is the same as s with the one exception that the value of register i is now set to be the number n. As before with other machines, we assume that p is the program ℓ1:I1; …; ℓm:Im. The indirect addressing <Xj> refers to the register whose number is the current content of Xj (hence the s(s(j)) below).

p ⊢ (ℓ, s) → (ℓ+1, s[i := s(i)+1])        if p(ℓ) = Xi := Xi+1
p ⊢ (ℓ, s) → (ℓ+1, s[i := s(i)−1])        if p(ℓ) = Xi := Xi-̇1 and s(i) > 0
p ⊢ (ℓ, s) → (ℓ+1, s[i := 0])             if p(ℓ) = Xi := Xi-̇1 and s(i) = 0
p ⊢ (ℓ, s) → (ℓ+1, s[i := s(j)])          if p(ℓ) = Xi := Xj
p ⊢ (ℓ, s) → (ℓ+1, s[i := 0])             if p(ℓ) = Xi := 0
p ⊢ (ℓ, s) → (ℓ1, s)                      if p(ℓ) = if Xi=0 goto ℓ1 else ℓ2 and s(i) = 0
p ⊢ (ℓ, s) → (ℓ2, s)                      if p(ℓ) = if Xi=0 goto ℓ1 else ℓ2 and s(i) ≠ 0
p ⊢ (ℓ, s) → (ℓ+1, s[i := s(s(j))])       if p(ℓ) = Xi := <Xj>
p ⊢ (ℓ, s) → (ℓ+1, s[s(i) := s(j)])       if p(ℓ) = <Xi> := Xj
p ⊢ (ℓ, s) → (ℓ+1, s[i := s(j)+s(k)])     if p(ℓ) = Xi := Xj + Xk
p ⊢ (ℓ, s) → (ℓ+1, s[i := s(j)×s(k)])     if p(ℓ) = Xi := Xj * Xk

Table 11.6: One-step operational semantics for SRAM and RAM
17
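Table 11.6 can be transcribed almost rule by rule. The sketch below (again my own encoding: a dictionary defaulting to 0 for the store, tuples for instructions) implements the (S)RAM one-step semantics and runs a small SRAM-only program that doubles its input using just move, decrement, increment and conditional jumps.

```python
from collections import defaultdict

def sram_step(program, state):
    """One step of Table 11.6 (SRAM instructions plus the RAM-only binary ops)."""
    label, s = state
    instr = program[label - 1]
    op = instr[0]
    s = s.copy()
    if op == "zero":                                   # Xi := 0
        s[instr[1]] = 0
    elif op == "inc":                                  # Xi := Xi+1
        s[instr[1]] += 1
    elif op == "dec":                                  # Xi := Xi -. 1 (monus)
        s[instr[1]] = max(0, s[instr[1]] - 1)
    elif op == "move":                                 # Xi := Xj
        s[instr[1]] = s[instr[2]]
    elif op == "load":                                 # Xi := <Xj>
        s[instr[1]] = s[s[instr[2]]]
    elif op == "store":                                # <Xi> := Xj
        s[s[instr[1]]] = s[instr[2]]
    elif op == "add":                                  # Xi := Xj + Xk (RAM only)
        s[instr[1]] = s[instr[2]] + s[instr[3]]
    elif op == "mul":                                  # Xi := Xj * Xk (RAM only)
        s[instr[1]] = s[instr[2]] * s[instr[3]]
    elif op == "ifz":                                  # if Xi=0 goto l1 else l2
        _, i, l1, l2 = instr
        return (l1 if s[i] == 0 else l2, s)
    else:
        raise ValueError(instr)
    return (label + 1, s)

def run_sram(program, x):
    s = defaultdict(int, {0: x})          # Readin(x): register 0 holds the input
    state = (1, s)
    while state[0] <= len(program):       # halt at label m+1
        state = sram_step(program, state)
    return state[1][0]                    # Readout: contents of register 0

# SRAM-only example: double the input held in X0.
double = [
    ("move", 1, 0),     # 1: X1 := X0
    ("ifz", 1, 6, 3),   # 2: if X1=0 goto 6 (halt) else 3
    ("dec", 1),         # 3: X1 := X1 -. 1
    ("inc", 0),         # 4: X0 := X0+1
    ("ifz", 2, 2, 2),   # 5: goto 2  (X2 is never written, so it stays 0)
]
print(run_sram(double, 4))   # 8
```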


Counter Machine CM
Counter machines are much simpler than register machines. They contain several registers, called counters, as they can only be incremented or decremented and tested for zero. Any one program can only reference a fixed set of counters, and counter contents are initially zero except for the input, so for any store used during a run only finitely many counters are nonzero. The following grammar describes the CM instruction syntax and so defines CM-prog:
I ::= Xi := Xi+1 | Xi := Xi-̇1 | if Xi=0 goto ℓ else ℓ′
(Sometimes the dot will be omitted from -̇.) Additional computable instructions could be added, e.g. Xi := 0, Xi := Xj, if Xi=0 goto ℓ, or goto ℓ. Such extensions are, however, unnecessary in principle, since they are special cases of, or can be simulated using, the instruction set above.
A store is a function in
CM-store = { σ | σ : ℕ → ℕ }
where σ(i) is the current contents of counter Xi for any i ∈ ℕ. The store initialisation and result readout functions are defined as follows:
Readin(x) = [0 ↦ x, 1 ↦ 0, 2 ↦ 0, …]   (input in counter 0)
Readout(σ) = σ(0)   (output from counter 0)
The one-step semantics is as for register machines, restricted to the three instructions above. 2CM is like CM but with 2 counters only.
18
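Since CM programs use only increment, decrement and zero-test, a counter machine is easy to simulate too. The Python sketch below (my own encoding, merely illustrative) runs a CM program that computes ⌊x/2⌋: it repeatedly removes two from counter 0 while incrementing counter 1, then copies counter 1 back into counter 0; note how the unconditional goto mentioned above is simulated by testing a counter (X2) that is known to stay 0.

```python
from collections import defaultdict

def run_cm(program, x):
    """Run a counter machine program; Readin puts the input in counter 0,
    Readout returns counter 0. Instructions: inc, dec (monus), if-zero jump."""
    sigma = defaultdict(int, {0: x})
    label = 1
    while label <= len(program):              # halt at label m+1
        instr = program[label - 1]
        if instr[0] == "inc":                 # Xi := Xi+1
            sigma[instr[1]] += 1
            label += 1
        elif instr[0] == "dec":               # Xi := Xi -. 1
            sigma[instr[1]] = max(0, sigma[instr[1]] - 1)
            label += 1
        else:                                 # ("ifz", i, l1, l2)
            _, i, l1, l2 = instr
            label = l1 if sigma[i] == 0 else l2
    return sigma[0]

# floor(x/2): halve counter 0 into counter 1, then copy counter 1 back into 0.
halve = [
    ("ifz", 0, 7, 2),   # 1: if X0=0 goto 7 else 2
    ("dec", 0),         # 2: X0 := X0 -. 1
    ("ifz", 0, 7, 4),   # 3: if X0=0 goto 7 else 4
    ("dec", 0),         # 4: X0 := X0 -. 1
    ("inc", 1),         # 5: X1 := X1+1
    ("ifz", 2, 1, 1),   # 6: goto 1  (counter 2 is never touched, so it is 0)
    ("ifz", 1, 11, 8),  # 7: if X1=0 goto 11 (halt) else 8
    ("dec", 1),         # 8: X1 := X1 -. 1
    ("inc", 0),         # 9: X0 := X0+1
    ("ifz", 2, 7, 7),   # 10: goto 7
]
print(run_cm(halve, 5))   # 2
```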

Cellular Automata CA
We focus on one specific version: the famous 2-dimensional CA called (Conway's) Game of Life, after John Conway (Cambridge mathematician).
• cell lattice (grid); each cell contains 0 or 1
• the neighbourhood of a cell (m,n) consists of 8 cells
• the value of a cell changes every "time tick" and the new value is determined only by the values of the neighbourhood cells
[Fig. 11.6: Moore neighbourhood of (m,n) — the 8 cells (m−1,n−1), (m,n−1), (m+1,n−1), (m−1,n), (m+1,n), (m−1,n+1), (m,n+1), (m+1,n+1).]
19

Rules of Game of Life
alive = 1 (solid line / filled), dead = 0 (no colour):
• Underpopulation: a cell dies if fewer than two of its neighbours are alive.
• Survival: a cell survives (stays alive) if two or three of its neighbours are alive.
• Overcrowding: a cell dies if more than three of its neighbours are alive.
• Reproduction: a dead cell becomes alive if exactly three of its neighbours are alive.

[Fig. 11.7: Nearest neighbourhood of (m) — the cells (m−1), (m), (m+1) in the one-dimensional case.]

Definition 11.5. A cellular automaton is a tuple (C,T,S,u) where
• (C,+) is the (unbounded)¹² lattice of cells,
• T is the neighbourhood template of size n,
• S is the finite set of states a cell can assume, with a distinguished state, called the quiescent state 0 (denoting a "blank cell"), and
• u is the local update function.

Example 11.6. Let us consider the size of the neighbourhoods described in Example 11.5. For the Moore neighbourhood we easily see that |T_Moore| = 9. For the one-dimensional nearest neighbourhood we immediately see that |T_nearest| = 3.

¹² This is in analogy with the unbounded size of Turing machine tapes.
20
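The four rules above amount to a simple local update. Here is a hedged Python sketch of one "time tick" (my own representation: the grid is the set of coordinates of currently live cells, which keeps the unbounded lattice finite to store since only finitely many cells are alive).

```python
from collections import Counter

# Moore neighbourhood offsets of a cell (m, n): the 8 surrounding cells.
NEIGHBOURS = [(dm, dn) for dm in (-1, 0, 1) for dn in (-1, 0, 1) if (dm, dn) != (0, 0)]

def life_step(alive):
    """One time tick of the Game of Life.
    `alive` is the set of coordinates of live cells (state 1); all other cells are 0."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter((m + dm, n + dn) for (m, n) in alive for (dm, dn) in NEIGHBOURS)
    new_alive = set()
    for cell, k in counts.items():
        # Reproduction (exactly 3 live neighbours) or survival (a live cell with 2 or 3);
        # every other cell dies of under-population or overcrowding.
        if k == 3 or (k == 2 and cell in alive):
            new_alive.add(cell)
    return new_alive

# The "blinker" oscillator from Fig. 11.8d: it flips between a row and a column.
blinker = {(0, -1), (0, 0), (0, 1)}
print(life_step(blinker))                           # the column {(-1,0), (0,0), (1,0)}
print(life_step(life_step(blinker)) == blinker)     # True: the blinker has period 2
```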

Patterns of Game of Life
Behaviour may evolve in the shape of patterns. A few simple patterns are depicted in Fig. 11.8: there are "stills" like "block" (Fig. 11.8a, static) and "boat" (Fig. 11.8b, static), which do not change at all over time unless their local neighbourhood changes; there are oscillators like "blinker" (Fig. 11.8d, dynamic), which changes in intervals of even and odd time ticks¹⁶; some configurations disappear over time, like the "collapsing V" (Fig. 11.8c), which disappears after three ticks (again, if there is no stimulus coming from the outside); the "glider" (Fig. 11.8e, dynamic) moves diagonally by one cell every four stages; and "Gosling's Glider Gun" (Fig. 11.8f) is productive: it produces gliders.

[Fig. 11.8: Game of Life patterns — (a) static pattern 'Block'; (b) static pattern 'Boat'; (c) stages of collapsing 'V' (time i to i+3); (d) oscillating stages of the 'Blinker'; (e) diagonal movement of the 'Glider' in 4 stages; (f) 'Gosling's Glider Gun'.]

¹⁶ Other patterns may need a longer interval to repeat their original state.
21

Cellular Automata
Fun, but can they compute in our sense? Yes, one has to:
• encode the input as the starting grid;
• the result is delivered if a predefined cell changes its state to a predefined "accepting state";
• the "accepting state" must be left alone in further grid changes.
22

Robustness of Computability
23
Church-Turing Thesis

Robustness of Computability
We justify the Church-Turing thesis by showing that all the above notions of computation are equivalent to WHILE.
Proof technique:
• compile one program of language X into an equivalent one of language Y (if X is not a sub-language of Y anyway);
• compose compilers appropriately to avoid having to write n² compilers.
24

Diagram of Equivalences
Fig. 11.9 (Compilation Diagram: Evidence for the Church-Turing Thesis) relates the languages WHILE, WH1LE, GOTO, TM, RAM, SRAM, CM, 2CM and the cellular automata CA (two-dimensional CA2, one-dimensional CA1). The label of each arrow is either ⊂, if one language is included in the other, or the name/idea of the required compilation; among the labels are: pairing, tupling, Böhm-Jacopini [3], "expression-split & loops by jumps", "2 tapes: register address & content", "tape as two lists", interpreter, exercise, and e.g. [25]. To compile from one language to another, just compose the compilers on the arrows you have to travel along. Details of the compilations are in N. Jones' book, Chapter 8, and the exercises.
Below we give short explanations of the compilation processes involved. For more information consult [15, Chap. 8] or the exercises of this chapter.
• one variable as list of variables: WHILE → WH1LE
The idea of this compiler has been mentioned earlier. We store all the values of the variables of the source program in a list that is stored in the one allowed variable.
25
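As a concrete illustration of this idea (my own sketch, not the book's compiler): the values of all source variables are kept in one list, itself built from cons cells, and an access to source variable number i becomes i applications of tl followed by hd, while writing rebuilds the spine with cons. Since i is fixed at compile time, these accesses are straight-line expressions over hd, tl and cons only, and are therefore expressible in WH1LE.

```python
NIL = None
def cons(d, e): return (d, e)
def hd(t): return t[0] if t is not NIL else NIL      # hd nil = nil, as in WHILE/GOTO
def tl(t): return t[1] if t is not NIL else NIL      # tl nil = nil

# The single WH1LE variable holds the list [X, Y, Z] as cons X (cons Y (cons Z nil)).
def get(var_list, i):
    """Read source variable number i: i times tl, then hd."""
    for _ in range(i):
        var_list = tl(var_list)
    return hd(var_list)

def set_(var_list, i, value):
    """Write source variable number i, rebuilding the spine with cons."""
    if i == 0:
        return cons(value, tl(var_list))
    return cons(hd(var_list), set_(tl(var_list), i - 1, value))

# Simulating the source assignment  Y := cons X Y  (X is index 0, Y is index 1):
store = cons("x-val", cons("y-val", cons("z-val", NIL)))   # the one allowed variable
store = set_(store, 1, cons(get(store, 0), get(store, 1)))
print(get(store, 1))    # ('x-val', 'y-val')
```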

END
© 2008-21 Bernhard Reus, University of Sussex
Next time: Moving on to Complexity. How do we measure time usage?
26