
COMS4236: Introduction to
Computational Complexity

Spring 2018

Mihalis Yannakakis

Lecture 1, 1/16/18

Course Information

• Lectures:
– Tuesday, Thursday 1:10-2:25
– 103 Knox Hall (but may change soon)

• Web site:
Courseworks

• TAs: Rex Lei, Michael Tong

Course Work

• Homeworks, Final
• Policies

– Late Homework: 10% penalty per late day
– Lowest hw does not count in grade
– Collaboration policy
– Grading: Homeworks 60%, Final 40%

• Homework 0: Info about yourself due Thursday 1/18

Textbook
• Required:
Computational Complexity, by C. Papadimitriou

• Other:
- Computational Complexity: A Modern Approach, by S. Arora, B. Barak
- Introduction to the Theory of Computation, by M. Sipser
- Introduction to Automata Theory, Languages and Computation, by Hopcroft, Motwani, Ullman
- Computers and Intractability: A Guide to the Theory of NP-Completeness, by Garey, Johnson

Problems, Algorithms, Models

Problem: What is to be computed

Model: Who computes it

Algorithm: How to compute it

Problems
• Problem: What needs to be computed

Input → Output

Example: Graph Reachability problem
input = (graph G, nodes s,t)
output = “Yes” if G has path from s to t, “No” otherwise

Factoring problem
input = positive integer
output = list of its prime factors

• Set of instances (possible inputs)
• For each instance a set of acceptable (correct) outputs
• Problem: Given an instance, compute an acceptable output

Algorithms, Models
• Model of Computation

e.g. sequential (von Neumann) computer
parallel computer
quantum computer
Boolean circuit
…..

• Algorithm: Method for solving the problem on the
model of computation

Resources & Complexity
• Computational resources: Time, Space, Processors (for parallel
computation), Communication (for distributed computation), Area
(for a VLSI implementation), ….

• R-Complexity of an algorithm: amount of resource R
that it uses (R=time, space,…)

• R-Complexity of a problem: lowest complexity of an
algorithm that solves the problem wrt resource R of
interest (time, space, etc)

Goals of Complexity Theory
1. Characterize the complexity of problems and classify them into
complexity classes: classes of problems that can be solved within
certain complexity bounds

– Problems are studied not only in isolation but
also as they relate to each other

Central notions of
– Reduction between problems
– Completeness for a class

Goals of Complexity Theory ctd.

2. Relations between
– resources (time, space etc.)
– complexity classes
– models of computation

– Power and limitations of different
modes/paradigms of computation, e.g.
nondeterminism, randomization, parallelism etc.

– “Laws” of Computation

Focus of this course

• Basic Resources: Time, Space mainly
• Basic Models: Turing machines mainly (and briefly RAM, Boolean
combinational circuits)
• Deterministic, nondeterministic, probabilistic

• Problems: Various types: Decision, search,
optimization, games, counting problems

Graph Reachability

• Input: Directed Graph G=(N,E), nodes s, t
• Question: Is there a path in G from s to t?
i.e. Output = “Yes” (if there is a path) or “No” (if there is no path)

Decision problem: output = Yes/No

Graph may be given by its adjacency matrix A or
by adjacency lists: a list Adj[u] for each u in N
that lists all nodes v to which u has an edge

Example

[Figure: directed graph on nodes 1–5; its edges are given by the
adjacency matrix below]

Adjacency matrix A:
0 1 0 1 0
0 0 1 0 0
0 0 0 1 1
0 1 0 0 1
0 0 0 0 0

A[i,j]=1 iff i→j

Node 1 can reach node 5

Node 4 cannot reach node 1

Graph Search
for each v ∈ N−{s} do { mark[v]=0 }
mark[s]=1;
Q = {s}
while Q ≠ ∅ do
  { u = select and delete a node from Q
    for each v ∈ N do
      if (A[u,v]=1 and mark[v]=0) then
        { mark[v]=1; Q = Q ∪ {v} }
  }
If mark[t]=1 then print “Yes” else print “No”

If Q implemented as a queue (i.e. we delete from front, insert
into back), then Breadth-First-Search
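
For concreteness, a direct Python rendering of this search procedure (an
illustrative sketch; the function name reachable and the variable names are
my own, not from the lecture):

from collections import deque

def reachable(A, s, t):
    # A is the n x n adjacency matrix of a directed graph with nodes 0..n-1;
    # returns True iff there is a path from s to t.
    n = len(A)
    mark = [False] * n
    mark[s] = True
    Q = deque([s])                 # using a queue => Breadth-First-Search
    while Q:                       # while Q is nonempty
        u = Q.popleft()            # select and delete a node from Q
        for v in range(n):
            if A[u][v] == 1 and not mark[v]:
                mark[v] = True
                Q.append(v)
    return mark[t]

# The 5-node example graph from the earlier slide (nodes 1..5 become 0..4):
A = [[0,1,0,1,0],
     [0,0,1,0,0],
     [0,0,0,1,1],
     [0,1,0,0,1],
     [0,0,0,0,0]]
print(reachable(A, 0, 4))   # node 1 can reach node 5: True
print(reachable(A, 3, 0))   # node 4 cannot reach node 1: False

With a stack instead of a queue the same code performs a depth-first style
search; either way each node is processed at most once.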

Time Complexity

• For every input graph with n nodes, the time complexity is
proportional to n²: Time = O(n²)
(each node enters Q at most once, and processing a node scans one row of
the adjacency matrix, i.e., n entries)

• Some important points to note:
• Complexity is expressed as a function of a parameter n measuring
the size of the input (here the number of nodes)
• Worst-case complexity: maximum over all inputs of size n
• Complexity is expressed in terms of its “rate of growth”:
asymptotic notation O(·) suppresses constant factors

Asymptotic Notation Review:
Theta, Big-Oh, Omega

Big-Oh:
(Order)

Convention: We usually write f(n) = O(g(n))

O(g(n)) = { f(n) | ∃ constant c > 0 and n₀ s.t.
            0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }

[Figure: f(n) eventually lies below c·g(n), for all n ≥ n₀]

Example: 5n = O(n²), but not vice versa

Caution: = here denotes membership, not equality
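
To spell out the example with explicit constants (a worked check of the
definition; the constants are my own choice, not from the slide):

with c = 5 and n₀ = 1:  0 ≤ 5n ≤ 5·n² for all n ≥ 1,  hence 5n = O(n²)
conversely, n² ≤ c·5n would force n ≤ 5c for all n ≥ n₀, which fails
once n > 5c,  hence n² ≠ O(n)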

Asymptotic Notations:
Theta, Big-Oh, Omega

Omega:

Convention: We usually write f(n) = Ω(g(n))

Ω(g(n)) = { f(n) | ∃ constant c > 0 and n₀ s.t.
            c·g(n) ≤ f(n) for all n ≥ n₀ }

[Figure: f(n) eventually lies above c·g(n), for all n ≥ n₀]

Example: 5n² = Ω(n), but not vice versa

Asymptotic Notations:
Theta, Big-Oh, Omega

Theta:

Convention: We usually write f(n) = Θ(g(n))

Θ(g(n)) = { f(n) | ∃ constants c₁, c₂ > 0 and n₀ s.t.
            c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }

[Figure: f(n) eventually lies between c₁·g(n) and c₂·g(n), for all n ≥ n₀]

Asymptotic Notations:
little-oh, little-omega

little-oh:

o(g(n)) = { f(n) | for every constant c > 0 there is an n₀ s.t.
            0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }

little-omega:

ω(g(n)) = { f(n) | for every constant c > 0 there is an n₀ s.t.
            0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }

f(n) = o(g(n)) means that for large n, function f is smaller
than any constant fraction of g

f(n) = ω(g(n)) means that for large n, function f is larger than
any constant multiple of g, i.e., g = o(f(n))

Example: 5n = o(n²), 5n² = ω(n)
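
Equivalently, in terms of the ratio f(n)/g(n) (a quick check, not on the
slide; it also previews the summary table below):

5n / n² = 5/n → 0 as n → ∞,   so 5n = o(n²)
5n² / n = 5n → ∞ as n → ∞,   so 5n² = ω(n)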

Asymptotic Notations Summary

Notation          Ratio f(n)/g(n) for large n

f(n) = o(g(n))    f(n)/g(n) → 0
f(n) = O(g(n))    f(n)/g(n) ≤ c        (some constant c)
f(n) = Θ(g(n))    c₁ ≤ f(n)/g(n) ≤ c₂  (some constants c₁, c₂ > 0)
f(n) = Ω(g(n))    f(n)/g(n) ≥ c        (some constant c > 0)
f(n) = ω(g(n))    f(n)/g(n) → ∞

Examples
• For every polynomial p(n) of degree d with a positive leading term,
p(n) = Θ(n^d). For example, 4n³ − 3n + 7 = Θ(n³)

• Every constant is O(1), for example 1000 = O(1)

• log_a n = Θ(log_c n) for any two constant bases a, c.
For example, log₁₀ n = Θ(log₂ n)

• Every constant power of log n is little-oh of every constant root
of n, for example (log n)^10 = o(n^(1/5))

• Every power of n is little-oh of every exponential function of n
(with base > 1), for example n^10 = o(1.1^n)
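
The claim about logarithm bases follows from the change-of-base formula
(a one-line derivation, not on the slide):

log_a n = log_c n / log_c a = (1 / log_c a) · log_c n

i.e., log_a n is a fixed constant multiple of log_c n, so log_a n = Θ(log_c n).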

Some common functions

log log n ≺ log n ≺ log² n ≺ log³ n ≺ √n ≺ n ≺ n log n ≺ n² ≺ n³ ≺ …   (polynomial)

≺ 2^n ≺ 3^n ≺ … ≺ n!   (exponential)

(here f ≺ g means f = o(g))
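
To make the ordering concrete, here is a small Python script (my own
illustration, not part of the slides) that tabulates a few of these
functions; each column eventually dwarfs the previous one:

import math

def growth_table(ns):
    # Tabulate representative functions from the chain above at each n.
    print("    n   loglog n      log n     sqrt n      n log n          n^2            2^n")
    for n in ns:
        print(f"{n:5d} {math.log(math.log(n)):10.2f} {math.log(n):10.2f} "
              f"{math.sqrt(n):10.2f} {n * math.log(n):12.1f} {n * n:12d} {2.0 ** n:14.3e}")

growth_table([10, 20, 40, 80])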

Representation of Problems
• Input, output represented as strings over a finite alphabet A
• e.g. A={0,1}, ASCII characters, …
• Size of input x =|x|, the length of string x
• Recoding one alphabet into another changes the size linearly
(alphabets fixed, finite) as long as |A| ≥ 2

• Encode inputs so that we can easily determine which strings
are valid encodings and can decode them

• Complexity may depend on the representation of inputs (for
example graph represented by adjacency matrix vs.
adjacency lists), but reasonable encodings are intertranslatable in
polynomial time

Examples
• Numbers can be represented in decimal or, more usually, in binary
notation: 5 = 101
– size(N) ≈ log N
– exponential difference between size and value of a number

• Graph given by its adjacency matrix: can list it e.g. by rows with a
separator between rows

[Figure: directed graph on nodes 1, 2, 3, 4]

Adjacency matrix:
0 1 0 1
0 0 1 0
0 0 0 1
0 1 0 0

Input string over A={0,1,;}:  0101;0010;0001;0100

• Graph given by its list of nodes and edges:
1,2,3,4#(1,2),(1,4),(2,3),(3,4),(4,2) over A = {0,…,9, (, ), #, ","}
or with node ids in binary: 1,10,11,100#(1,10),(1,100),(10,11),(11,100),(100,10)
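
A small sketch of producing and parsing the string representation above
(my own illustration; the function names encode/decode are made up):

def encode(A):
    # Rows of 0/1 characters separated by ';'
    return ";".join("".join(str(x) for x in row) for row in A)

def decode(s):
    # Recover the matrix; validity is easy to check (rows of equal length,
    # characters in {0,1}), as required of a reasonable encoding.
    return [[int(c) for c in row] for row in s.split(";")]

A = [[0,1,0,1],
     [0,0,1,0],
     [0,0,0,1],
     [0,1,0,0]]
s = encode(A)
print(s)                    # 0101;0010;0001;0100
assert decode(s) == A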