COMP0017: Computability and Complexity Part (II): Complexity
Slides for Lecture 16
▶ Garey and Johnson, “Computers and Intractability”, Freeman, 1979.
▶ Cook, “The Complexity of Theorem Proving Procedures”, 1971.
▶ Hopcroft, Motwani and Ullman, “Introduction to Automata Theory, Languages, and Computation”, Addison-Wesley, 2nd Ed., 2001.
▶ Papadimitriou, “Computational Complexity”, Addison-Wesley, 1994.
▶ Sipser, “Introduction to the Theory of Computation”, MIT, 2006.
▶ Some problems are only solvable by algorithms that search through the space of all possible options.
▶ For instance, deciding whether any possible set of specifications of a black-box system can be met, and constructing a design that meets them, is a problem of this type.
▶ Solving this problem is only possible by searching through the space of all possible designs.

Intractability

▶ These problems are referred to as intractable.
▶ Proving that a problem is intractable can be as hard as finding an efficient algorithm to solve it.
▶ The theory of NP-completeness, or Complexity as we call it in this module, provides straightforward techniques for proving that a given problem is intractable.
▶ Following Cook’s work, this is done by reducibility: that is, by showing that the problem is just as hard as a large number of other problems that are widely recognised as intractable and have kept experts busy for years.
▶ The theory of NP-completeness also teaches us how to cope with a difficult problem at hand: by finding an efficient algorithm for various special cases of it, an algorithm that runs quickly most of the time, one that meets most of the requirements, etc.

Examples of Intractable Problems

▶ Graph coloring: How many colors do you need to color a graph such that no two adjacent vertices have the same color?
▶ Bin packing: How many bins of a given size do you need to hold n items of variable size?
▶ SAT: The satisfiability problem: test whether a given Boolean formula is satisfiable.
▶ SubGraph: Does a given graph G contain a complete subgraph on a given number k of vertices?
▶ Subset-Sum or Integer partition: Can you partition n integers into two subsets such that the sums of the subsets are equal?
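Subset-Sum illustrates the search-through-all-options flavour of these problems: with no better idea, one can try all 2^n ways of splitting the numbers. A minimal brute-force sketch (the function name and example are my own, not from the slides):

```python
from itertools import product

def can_partition(nums):
    """Search all 2^n assignments of each number to one of two subsets;
    return an equal-sum split if one exists, else None."""
    total = sum(nums)
    if total % 2 != 0:          # an odd total can never split evenly
        return None
    for choice in product([0, 1], repeat=len(nums)):
        left = [x for x, c in zip(nums, choice) if c == 0]
        if sum(left) * 2 == total:
            right = [x for x, c in zip(nums, choice) if c == 1]
            return left, right
    return None

print(can_partition([3, 1, 1, 2, 2, 1]))  # some equal-sum split of the list
print(can_partition([1, 1, 3]))           # None: total 5 is odd
```

The loop body is cheap, but the number of iterations doubles with every extra input number, which is exactly the exponential blow-up discussed in these slides.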

Which one of these do you think is intractable?
Vote at https://www.menti.com/xxkwmo95zr
(or go to www.menti.com and use the code 4001482)
▶ Is n prime?
▶ Is graph G connected?
▶ Find the shortest path from x to y in weighted graph G
▶ Given regexp r and string s, is s ∈ L(r)?
▶ Given regexps r, s, is L(r) = L(s)?
▶ Travelling Salesman Problem
▶ Does program P halt?
▶ From position P, which player has a winning strategy in generalised chess?

Formalising

▶ A problem is a general question, with a set of parameters with unspecified values, which needs to be answered.
▶ A problem is described by its set of parameters and the properties its solution should satisfy.
▶ An instance of a problem is obtained by specifying particular values for all of the parameters of the problem.

Travelling Salesman Problem

▶ A set of cities {c1, c2, ..., cm}, the nodes or vertices of a graph, and for each pair of cities ci, cj a distance d(ci, cj).
▶ A solution to this problem is an ordering of the cities ⟨cπ(1), cπ(2), ..., cπ(m)⟩ such that each city occurs exactly once in the ordering and the ordering minimises the value

    Σ_{i=1}^{m−1} d(cπ(i), cπ(i+1)) + d(cπ(m), cπ(1))

An Instance of TSP

An instance of this problem, illustrated below, is given by C = {c1, c2, c3, c4} and the distances
d(c1,c2) = 10, d(c1,c3) = 5, d(c1,c4) = 9, d(c2,c3) = 6, d(c2,c4) = 9, d(c3,c4) = 3.
What is a solution for this instance?
⟨c1, c2, c4, c3⟩
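This small instance can be checked by brute force: fix the starting city and try every ordering of the rest. A sketch (the function names are my own; the final distance d(c3,c4) = 3 is read off the encoding string given later in the slides):

```python
from itertools import permutations

# symmetric distances for the instance C = {c1, c2, c3, c4}
d = {(1, 2): 10, (1, 3): 5, (1, 4): 9, (2, 3): 6, (2, 4): 9, (3, 4): 3}
dist = lambda a, b: d[(min(a, b), max(a, b))]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(dist(order[i], order[(i + 1) % len(order)])
               for i in range(len(order)))

def best_tour(cities):
    """Brute force: try all (m-1)! orderings starting from the first city."""
    first, *rest = cities
    return min(((first,) + p for p in permutations(rest)), key=tour_length)

print(best_tour([1, 2, 3, 4]), tour_length(best_tour([1, 2, 3, 4])))
# (1, 2, 4, 3) 27 -- the solution quoted on the slide
```

Note the cost: (m − 1)! orderings, which is already the factorial growth from the complexity table later in this lecture.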

Formalising

▶ Algorithms are step-by-step procedures for solving problems.
▶ We think of these procedures as instructions to a Turing Machine (TM).
▶ We are interested in the most efficient algorithm that can solve a problem.
▶ Efficiency involves all the various resources needed for executing an algorithm. In this course, however, we are only interested in the fastest algorithms; so “most efficient” means “fastest” for us.
▶ Time requirements are expressed in terms of a single variable, namely the “size” of a problem instance, i.e. the amount of input data needed to describe that instance.

It is tricky!

▶ The tricky point is that the size of the input may not be as simple as it seems. For instance, for the TSP it might seem that the size of the input is the number of cities of an instance, but we also have the collection of m(m−1)/2 distances and the numbers that describe these distances.
▶ This challenge is overcome by describing an instance of a problem as a single finite string of symbols chosen from a finite alphabet. This scheme is called an encoding scheme.
▶ The input length of an instance of a problem is the number of symbols in its description.

As an example, instances of the TSP can be described using the alphabet
{c, [, ], /, 0, 1, ..., 9}
with our previous example being encoded as the string
c[1]c[2]c[3]c[4]//10/5/9//6/9//3
The input length for this instance is the number of symbols in this string.
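Assuming the encoding string is written without spaces (the exact spelling in the original slides is hard to recover), the input length can simply be counted:

```python
# the finite alphabet from the slides
alphabet = set("c[]/0123456789")

# the encoded instance; writing it without spaces is my assumption
encoding = "c[1]c[2]c[3]c[4]//10/5/9//6/9//3"

assert set(encoding) <= alphabet  # every symbol belongs to the alphabet
print(len(encoding))              # 32: the input length of this instance
```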

Definition: Time Complexity

▶ It is now time for us to define the notion of time complexity!
▶ What if there are different encoding schemes?
▶ We really have to define the time complexity function after fixing an encoding scheme, but for this course we fix that to be the encoding on a TM.
▶ In fact, it turns out that the choice of encoding scheme in terms of a specific computing model does not make much difference.

Definition: Time Complexity of a TM

The time complexity function for an algorithm expresses its time requirements by giving, for each possible input length, the largest possible amount of time needed by the algorithm to solve a problem instance of that size on a TM.
Let T be a TM. Let f : N → N be a function such that
▶ if the input tape of T has ≤ n symbols then T must halt in ≤ f(n) steps.
Then f(n) is an upper bound for the time complexity of T.
If T does not halt on some inputs then its time complexity is undefined (or infinite).
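Step counting can be made concrete with a toy deterministic TM simulator (the encoding and names below are my own, not from the slides). The example machine accepts strings with an even number of 1s and halts within f(n) = n + 1 steps on any input of length n:

```python
def run_tm(delta, tape, state="q0", blank="_", accept="qa", reject="qr",
           max_steps=10_000):
    """Run a deterministic single-tape TM, counting its steps.
    delta maps (state, symbol) -> (new_state, written_symbol, move)."""
    cells = dict(enumerate(tape))  # tape as a sparse map: position -> symbol
    head, steps = 0, 0
    while state not in (accept, reject) and steps < max_steps:
        sym = cells.get(head, blank)
        state, cells[head], move = delta[(state, sym)]
        head += move
        steps += 1
    return state == accept, steps

# TM accepting strings with an even number of 1s:
# scan right, toggling between parity states q0 (even) and q1 (odd).
delta = {
    ("q0", "0"): ("q0", "0", +1), ("q0", "1"): ("q1", "1", +1),
    ("q1", "0"): ("q1", "0", +1), ("q1", "1"): ("q0", "1", +1),
    ("q0", "_"): ("qa", "_", +1), ("q1", "_"): ("qr", "_", +1),
}

print(run_tm(delta, "1001"))  # (True, 5): accepted in n + 1 steps
```

Here f(n) = n + 1 is an upper bound on the running time: one step per input symbol plus one step to read the blank and halt.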

Polynomial Time Complexity

▶ A Turing machine T runs in polynomial time (p-time) if its complexity is bounded by p(n), for some polynomial p.
▶ An algorithm has p-time complexity if it can be implemented by a deterministic TM that runs in p-time.
▶ A problem is in class P if it can be solved by a p-time algorithm.
These problems are called tractable.
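For contrast with the intractable examples, graph connectivity (one of the tractable problems in the earlier poll) is in class P: breadth-first search touches each vertex and edge a bounded number of times, so it runs in time polynomial (indeed linear) in the size of the graph. A minimal sketch (the example graphs are my own):

```python
from collections import deque

def is_connected(adj):
    """BFS from an arbitrary vertex; the graph is connected iff every
    vertex is reached. Runs in O(V + E) time."""
    if not adj:
        return True
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)

print(is_connected({1: [2, 3], 2: [1], 3: [1]}))         # True
print(is_connected({1: [2], 2: [1, 3], 3: [2], 4: []}))  # False: 4 is isolated
```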

Polynomials

Example:    x^3 − 3x^2 + 7x + 4
In general: p(n) = a_k n^k + a_{k−1} n^{k−1} + ... + a_2 n^2 + a_1 n + a_0
Not polynomials: √n (= n^{1/2}), 2^n, log n, n log n

In order of growth, rank f(n) from highest to lowest
Vote on https://www.menti.com/eyv7fzzt3a
(or go to www.menti.com and use the code 2725 1039)
▶ n^10
▶ √n
▶ 2^n
▶ n!
▶ 2^√(log₂ n)

In order of growth (lowest to highest):

2^√(log₂ n) ≤ √n ≤ n^10 ≤ 2^n ≤ n!

since 2^√(log n) ≤ 2^(log n) = n.
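The ordering can be spot-checked numerically. Comparing the base-2 logarithm of each function avoids astronomically large numbers (the sample point and code are my own):

```python
import math

n = 2 ** 16  # one sample point; the ordering holds for all large n

# log2 of each function, listed in claimed ascending order
vals = [
    math.sqrt(math.log2(n)),           # log2 of 2^sqrt(log2 n)
    0.5 * math.log2(n),                # log2 of sqrt(n)
    10 * math.log2(n),                 # log2 of n^10
    n,                                 # log2 of 2^n
    math.lgamma(n + 1) / math.log(2),  # log2 of n!
]
assert vals == sorted(vals)            # the claimed order is ascending
print([round(v, 1) for v in vals])
```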

Polynomial vs Exponential Time Complexity

                      time complexity
  constant            O(1)
  logarithmic         O(log n)
  linear              O(n)
  n-log-n             O(n log n)
  quadratic           O(n^2)
  cubic               O(n^3)
  exponential         O(k^n), e.g. O(2^n)
  factorial           O(n!)
  super-exponential   e.g. O(n^n)

Polynomial: the functions bounded from above by a polynomial (constant up to cubic).
Exponential: the remaining functions. Problems needing exponential time are intractable. Where does Class NP fit? Tricky!
Note that the terminology can get a bit confusing:
▶ A function such as log n or n log n, which is not a polynomial, is now referred to as polynomial!
▶ Some problems are genuinely exponential, in that they have an exponential time complexity of e.g. O(2^n).
▶ A problem with a super-exponential time complexity such as O(n^n) is also called exponential.

The Graph of Growth

Rate of Growth

Faster Computers

Subtleties

▶ There are exceptions to the difference between efficient polynomial time and inefficient exponential time: e.g. 2^n is faster than n^5 for small n (indeed 2^n ≤ n^5 for all 2 ≤ n ≤ 22).
▶ Some exponential time algorithms are quite useful in practice! They have a better average-case complexity.
  An example is the simplex algorithm of linear programming. This algorithm has an exponential worst-case time complexity, but in practice runs impressively quickly.
  Another example is the branch-and-bound algorithm for the knapsack problem.
▶ Some problems are provably intractable, and the first of these was Turing’s halting problem.
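The crossover point between 2^n and n^5 can be checked directly:

```python
# largest n with 2^n <= n^5; beyond it the exponential overtakes for good
crossover = max(n for n in range(1, 200) if 2 ** n <= n ** 5)
print(crossover)  # 22
```

So for inputs of size up to 22 the "exponential" algorithm is actually the cheaper one; the polynomial only wins asymptotically.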

Intractable Problems

Intractable problems that we consider are those that are so hard that an exponential amount of time is needed to solve them. We have two categories of them:
▶ undecidable problems
▶ decidable problems

Undecidable Intractable Problems

▶ The earliest intractable problem was Turing’s halting problem. More generally, Turing showed that there exist problems that are undecidable: no algorithm can ever solve them.
▶ Another problem of this sort is Hilbert’s tenth problem: the solvability of polynomial equations in integers.
▶ Various plane tiling problems.

Decidable Intractable Problems

▶ Problems from automata theory, formal languages, and mathematical logic.
▶ The people who discovered these problems proved that they cannot be solved in polynomial time even by a non-deterministic TM, i.e. a TM that can run an unbounded number of computations in parallel.
