Jeff Edmonds
York University
COSC 2011
Lecture 4
Asymptotic Analysis of Time Complexity
History of Classifying Problems
Growth Rates
Time Complexity
Linear vs Constant Time
Binary Search Time (log n)
Insertion Sort Time (Quadratic)
Don’t Redo Work
Test (linear) vs Search (Exponential)
Multiplying (Quadratic vs Exponential)
Bits of Input
Cryptography
Amortized Time Complexity
Worst Case Input
Classifying Functions (BigOh)
Adding Made Easy
Logs and Exponentials
Understand Quantifiers
Some Math
Time Complexity
t(n) = Θ(n^2)
Input Size
Time
Classifying Functions
f(n) = n^Θ(1)
Logs and Exps
2^a × 2^b = 2^(a+b)
2^(log n) = n
Adding Made Easy
∑i=1..n f(i)
Logic Quantifiers
∃g ∀b Loves(b,g)
∀b ∃g Loves(b,g)
Recurrence Relations
T(n) = a T(n/b) + f(n)
The Time Complexity of an Algorithm
Specifies how the running time depends on the size of the input.
Goodrich & Tamassia, Chapter 4
Specifies how the running time
depends on the size of the input.
Purposes:
To estimate how long a program will run.
To estimate the largest input that can reasonably be given to the program.
To compare the efficiency of different algorithms.
To help focus on the parts of code that are executed the largest number of times.
To choose an algorithm for an application.
The Time Complexity of an Algorithm
Specifies how the running time
depends on the size of the input.
The Time Complexity of an Algorithm
“size” of input
“time” T(n) executed .
Work for me to
give you the instance.
Work for you to
solve it.
A function mapping
Work for you to
read the instance.
History of Classifying Problems
Known
Unknown
Euclid (300 BC)
GCD
History of Classifying Problems
Known
GCD
Computable
Halting
Turing 1936
History of Classifying Problems
Computable
Known
GCD
Halting
Exp
Jack Edmonds 1965
Poly
History of Classifying Problems
Computable
Exp = 2^n
Poly = n^c
Quadratic = n^2
n log n
log n
Fast sorting
Look at input
Slow sorting
Considered Feasible
Brute Force
(Infeasible)
Mathematicians’ dream
Constant
Time does not grow with input.
Linear = n
Binary Search
Halting
[graph: time vs input size, with curves from slowest- to fastest-growing:
Constant (time does not grow with input), log n (binary search), Linear = n
(look at input), Quadratic = n^2 (slow sorting), Exp = 2^n (brute force,
infeasible); Halting is impossible]
Growth Rates

[graph: time vs input size for the curves 5, log n, n, n^2, 2^n]

T(n)      n = 10    n = 100   n = 1,000   n = 10,000
5         5         5         5           5            atom
log n     3         6         9           13           amoeba
n^(1/2)   3         10        31          100          bird
n         10        100       1,000       10,000       human
n log n   30        600       9,000       130,000      my father
n^2       100       10,000    10^6        10^8         elephant
n^3       1,000     10^6      10^9        10^12        dinosaur
2^n       1,024     10^30     10^300      10^3000      the universe

Note: The universe contains approximately 10^50 particles.
Growth Rates: what happens when the input grows from n to n+1, or doubles?

T(n)      T(n)   T(n+1)        T(2n)
5         T      T             T
log n     T      T + 1/n       T + 1
n^(1/2)   T      ≈ T           √2·T
n         T      T + 1         2T
n log n   T      ≈ T + log n   2T + 2n
n^2       T      ≈ T + 2n      4T
n^3       T      ≈ T + 3n^2    8T
2^n       T      2T            T^2
Growth Rates
[graph: time vs input size; the curve Exp = 2^n]
Let input size n = 60.
Time 2^60 = (2^10)^6 = (1024)^6 ≈ (10^3)^6 = 10^18
≈ age of the universe in seconds. (Infeasible)
Growth Rates
[graph: time vs input size; the curve log n]
Let input size n = 2^60 ≈ 10^18 = size of the universe.
Time = log n = 60.
Time grows with input size, barely!
Growth Rates
[log-scale graph: time vs input size; the polynomial curves n^c are feasible]

T(n)    n = 10    n = 100   n = 1,000   n = 10,000
n^2     100       10,000    10^6        10^8          elephant
n^3     1,000     10^6      10^9        10^12         dinosaur
n^4     10^4      10^8      10^12       10^16         manageable
Moore’s law: Approximately every two years,
in a fixed-cost computer,
the following have doubled:
# of transistors, memory, speed
the size of a typical input
# of ops in a typical computation
Log Scale
Happiness law:
Happiness goes up by one
when resources double.
I have $10. I would be happy if I had another $10.
I have $thousand. I would be happy if I had another $thousand.
I have $million. I would be happy if I had another $million.
I have $billion. I would be happy if I had another $billion.
happiness = log( resources )
resources = 2^happiness
Plotted on a log-log chart,
all polynomials look linear.
Exponentials are still exponential.
Slope = 1
Slope = 2
Slope = 3
Reasonable Input Size
Growth Rates
2^n
Reasonable Running Time
With Moore’s law
and happiness law
Specifies how the running time
depends on the size of the input.
The Time Complexity of an Algorithm
Definition of Time:
# of seconds (machine dependent).
# lines of code executed.
# of times a specific operation is performed (e.g., addition).
Which?
These are all reasonable definitions of time, because they are within a constant factor of each other.
Specifies how the running time
depends on the size of the input.
The Time Complexity of an Algorithm
Definition of Size:
# bits to write down the input.
Denoted n.
Small Cheat: n = # of elements.
14,23,25,30,31,52,62,79,88,98  (n = 10)
It may take O(log n) bits
to represent each element.
Experimental Studies
Write a program implementing the algorithm
Run the program with inputs of varying size and composition
Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time
Plot the results
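The recipe above can be sketched as a small Java harness. This is a minimal sketch: the summation inside run() is a stand-in for whatever algorithm is actually being measured, and the class and method names are illustrative, not from the slides.

```java
// Minimal timing harness for the experimental approach described above.
public class TimingHarness {

    // The "algorithm" under test: any stand-in works for the demo.
    static long run(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Time one run on an input of size n, in milliseconds.
    static long timeMillis(int n) {
        long start = System.currentTimeMillis();
        run(n);
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Run with inputs of varying size; print (n, time) pairs to plot.
        for (int n = 1_000; n <= 1_000_000; n *= 10)
            System.out.println(n + "\t" + timeMillis(n) + " ms");
    }
}
```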
Theoretical Analysis
Uses a high-level description of the algorithm instead of an implementation
Characterizes running time as a function of the input size, n.
Takes into account all possible inputs
Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Theoretical Analysis
Suppose we theoretically determine
our algorithm has running time
T(n) = O(n^2) = cn^2 + an + b
The lower-order terms an + b
don’t matter for big n.
Theoretical Analysis
Suppose we theoretically determine
our algorithm has running time
T(n) = O(n^2) = cn^2
We don’t know c, because we did not consider:
Which implementation of the abstract algorithm.
Whether 7 or 9 lines of code are in the inner loop.
Exactly which operations we count.
The speed of the computer.
Theoretical Analysis
Suppose we theoretically determine
our algorithm has running time
T(n) = O(n^2) = cn^2
Determine c experimentally:
Running it on input of size n = 10^3 takes T = 10^-3 sec.
T(10^3) = c(10^3)^2 = c·10^6 = 10^-3
c = 10^-9
Use this c theoretically:
On input of size n = 10^6, time is
T(10^6) = c(10^6)^2 = 10^3 sec ≈ 17 minutes.
Given 1 hr (60^2 sec), we can handle input size
n = (T/c)^(1/2) = (60^2/c)^(1/2) ≈ 2×10^6.
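The slide's arithmetic, as a hedged Java sketch: the method names are mine, and the single-measurement fit is only as trustworthy as the T(n) = c·n^2 model behind it.

```java
// Fit T(n) = c*n^2 to one measurement, then extrapolate.
public class FitConstant {

    // c from one measurement: T(n0) = c*n0^2  =>  c = T / n0^2.
    static double fitC(double n0, double tSeconds) {
        return tSeconds / (n0 * n0);
    }

    // Predicted time (seconds) on input size n.
    static double predict(double c, double n) { return c * n * n; }

    // Largest n finishing within tBudget seconds: n = sqrt(T/c).
    static double maxInput(double c, double tBudget) {
        return Math.sqrt(tBudget / c);
    }

    public static void main(String[] args) {
        double c = fitC(1e3, 1e-3);            // 1e-9, as on the slide
        System.out.println(predict(c, 1e6));   // ~1000 sec, about 17 min
        System.out.println(maxInput(c, 3600)); // ~1.9e6 in one hour
    }
}
```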
Search:
Input: A linked list.
Output: Find the end.
Alg: Walk there.
Time
Insert Front:
Input: A linked list.
Output: Add record to front.
Alg: Play with pointers.
Time = n (= 4 in the picture).
# of records = n.
Linear vs Constant Time
Search time = linear time = O(n)
Big Oh O( )
means ignore the multiplicative constant
2n and 10000n are both fine.
Insert-front time = 4 operations here
= constant time = O(1).
Time does not “depend” on the input.
2 and 10000 are both fine.
Big Oh O( )
means ignore the multiplicative constant
Time
= 4
Linear vs Constant Time
= Constant time = O(1)
Time does not “depend” on input.
∃ a Java Program J, ∃ an integer k,
∀ inputs I, Time(J,I) ≤ k
I write a Java Program J
and allow it some k time steps.
Ha. You can only use a small fixed amount like 4.
Understand
Quantifiers!
No. This can be as big as I like.
eg k = 1,000,000,000,000,000!
If you can use any amount of time, this seems neither fair nor constant!
Constant? Oh! In our quantifiers game,
you must first fix k. Then, knowing it,
I get to choose as big an input
as I want!
If you use more time,
I will give you a bigger input I.
Hee hee hee.
J must still solve the problem.
∃ a Java Program J, ∃ an integer k,
∀ inputs I, Time(J,I) ≤ k
Is this “Constant time” = O(1)?
[graph: a step function of n that stays below a fixed constant]
Yes, because it is bounded by a constant.
Take a sorted list.
Each O(1) time cut it in half.
The sub-lists are of size n, n/2, n/4, n/8,…,1
Time = O(log n)
= # bits to write n in binary
key 25
3 5 6 13 18 21 21 25 36 43 49 51 53 60 72 74 83 88 91 95
If key ≤ mid,
then key is in
left half.
If key > mid,
then key is in
right half.
Logarithmic Time
(Binary Search Time)
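The halving search above, as a Java sketch; it assumes a sorted int array, and the method name is mine.

```java
// Binary search: each O(1) step cuts the list in half, so O(log n) steps.
public class BinarySearchDemo {
    // Returns an index of key in sorted array a, or -1 if absent.
    static int search(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;       // fine for classroom-sized arrays
            if (key == a[mid]) return mid;
            if (key < a[mid]) hi = mid - 1;  // key <= mid: left half
            else lo = mid + 1;               // key > mid: right half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {3, 5, 6, 13, 18, 21, 25, 36, 43, 49};
        System.out.println(search(a, 25)); // 6
        System.out.println(search(a, 4));  // -1
    }
}
```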
Take a list.
Each O(1) time cut it in half.
The sub-lists are of size n, n/2, n/4, n/8,…,1
Time = O(log n)
= # bits to write n in binary
= time to walk from root to leaves of a balanced binary tree with n nodes.
[figure: balanced binary search tree with root 38;
level 2: 25, 51; level 3: 17, 31, 42, 63; leaves: 4, 21, 28, 35, 40, 49, 55, 71]
Logarithmic Time
(Binary Search Time)
Looking at Input: Linear time
Binary Search: log(n) time
Touch Input: Constant time
For big inputs,
100000 log n << 0.000001 n
100000 << 0.000001 log n
[graph: time vs input size]
Quadratic Time
(Insertion Sort)
88 14 98 25 62 52 79 30 23 31
Loop Invariant:
Some are sorted
The rest are to the side.
Maintain LI while making progress
Insert some number
[animation: rounds of insertion; the sorted part grows one number per round,
31 → 31 88 → 25 31 88 → 25 31 52 88 → 25 31 52 88 98
→ 25 31 52 62 88 98 → 14 25 31 52 62 88 98,
while the rest (79, 30, 23, …) wait to the side]
Quadratic Time (Insertion Sort)
Time to insert the i-th number: i.
Total time = 1 + 2 + 3 + 4 + … + n = ∑i=1..n i
Quadratic Time
(Insertion Sort)
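The rounds above, as runnable Java: a standard insertion sort where inserting element i costs up to i shifts, for a total of 1 + 2 + … + n = O(n^2). Variable names are mine.

```java
import java.util.Arrays;

// Insertion sort: a sorted prefix grows by one element per round.
public class InsertionSortDemo {
    static int[] sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];                  // next number to insert
            int j = i - 1;
            while (j >= 0 && a[j] > key) {   // shift larger elements right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                  // loop invariant: a[0..i] sorted
        }
        return a;
    }

    public static void main(String[] args) {
        int[] a = {88, 14, 98, 25, 62, 52, 79, 30, 23, 31};
        System.out.println(Arrays.toString(sort(a)));
    }
}
```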
Quadratic Time (Insertion Sort)
S = 1 + 2 + 3 + … + (n-1) + n      (= number of white dots)
S = n + (n-1) + (n-2) + … + 2 + 1  (= number of yellow dots)
Adding column by column, each of the n columns sums to n+1,
so the white and yellow dots together fill an n × (n+1) grid.
2S = n(n+1) dots in the grid
S = n(n+1)/2
Looking at Input: Linear time
Insertion Sort: Quadratic time
For big inputs,
100000 n << 0.000001 n2
[graph: time vs input size]
Don't Redo Work
Prefix Averages:
Input: an array X[i] of n values
Output: The average of each prefix
A[i] = (X[1] + X[2] + … + X[i])/i
for i = 1 to n do
    % compute A[i]
    s = 0
    for j = 1 to i do
        s = s + X[j]
    A[i] = s / i
return A
Which line works the hardest?
# times it is executed = 1 + 2 + 3 + … + n = O(n^2).
The other lines affect this time by a multiplicative constant or less.
Time(n) = O(n^2)
X: 21 23 25 31 20 18 16
A: 21 22 23 25 24 23 22
Don't Redo Work
Prefix Averages:
Input: an array X[i] of n values
Output: The average of each prefix
A[i] = (X[1] + X[2] + … + X[i])/i
Make “progress”
Maintain second LI
Maintain first
i = 0; s = 0
loop
    % LI1: A[1], …, A[i] computed
    % LI2: s = X[1] + X[2] + … + X[i]
    exit when i = n
    i = i + 1
    s = s + X[i]
    A[i] = s / i
return A
Establish LI. Maintain LI while making “progress”.
Exit Cond + LI ⇒ Post Cond.
Don't Redo Work
Prefix Averages:
Input: an array X[i] of n values
Output: The average of each prefix
A[i] = (X[1] + X[2] + … + X[i])/i
i = 0; s = 0
loop
    % LI1: A[1], …, A[i] computed
    % LI2: s = X[1] + X[2] + … + X[i]
    exit when i = n
    i = i + 1
    s = s + X[i]
    A[i] = s / i
return A
Which line works the hardest?
# times it is executed = n.
The other lines affect this time by a multiplicative constant or less.
Time(n) = O(n)
(Linear is better than quadratic.)
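Both prefix-average algorithms side by side in Java; this is a sketch, and the double-array types and method names are my choice.

```java
// The naive version re-adds every prefix (quadratic); the loop-invariant
// version carries the running sum s forward (linear): don't redo work.
public class PrefixAverages {

    // Quadratic: recomputes each prefix sum from scratch.
    static double[] slow(double[] x) {
        double[] a = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double s = 0;
            for (int j = 0; j <= i; j++) s += x[j]; // the hard-working line
            a[i] = s / (i + 1);
        }
        return a;
    }

    // Linear: maintain s = x[0] + ... + x[i] across iterations.
    static double[] fast(double[] x) {
        double[] a = new double[x.length];
        double s = 0;
        for (int i = 0; i < x.length; i++) {
            s += x[i];
            a[i] = s / (i + 1);
        }
        return a;
    }

    public static void main(String[] args) {
        double[] x = {21, 23, 25, 31, 20, 18, 16};
        System.out.println(java.util.Arrays.toString(fast(x)));
    }
}
```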
This is a circuit with
AND/OR/NOT gates
with n input variables/wires
and one output wire.
This is an assignment giving
true/false, yes/no, 0/1
to each input variable.
These values percolate
down to the output wire.
[figure: a small circuit of AND/OR/NOT gates over inputs x1, x2, x3;
the assignment x1=F, x2=T, x3=F percolates down to the output wire, giving F]
Test vs Search
Test/Evaluate:
Input: Circuit & Assignment
Output: Value at output.
Alg: Let values percolate down.
Time:
Search/Satisfiability:
Input: Circuit
Output: An assignment giving true.
Alg: Try all assignments.
(Brute Force)
Time:
Test vs Search
[figure: the same circuit; the assignment x1=F, x2=T, x3=F evaluates to F]
Test time: # of gates.
Search time: 2^n assignments to try.
Let n = 60.
Time 2^60 ≈ 10^18
= age of the universe in seconds.
Evaluate Circuit: Linear time
Satisfiability: Exponential time
For big inputs, n << n^2 << 2^n
[graph: time vs input size]
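Test vs Search in miniature, as a Java sketch. The hard-coded three-variable formula is a stand-in for a real circuit, not from the slides: evaluating one assignment is one pass, while searching tries all 2^n assignments.

```java
// Evaluating is fast; brute-force satisfiability is exponential.
public class TestVsSearch {

    // Test: one pass over the (stand-in) circuit: (x0 OR x1) AND (NOT x0 OR x2).
    static boolean eval(boolean[] x) {
        return (x[0] || x[1]) && (!x[0] || x[2]);
    }

    // Search: brute force over all 2^n assignments.
    static boolean[] satisfy(int n) {
        for (int bits = 0; bits < (1 << n); bits++) {
            boolean[] x = new boolean[n];
            for (int i = 0; i < n; i++) x[i] = ((bits >> i) & 1) == 1;
            if (eval(x)) return x;   // found a satisfying assignment
        }
        return null;                 // unsatisfiable
    }

    public static void main(String[] args) {
        boolean[] x = satisfy(3);
        System.out.println(x != null && eval(x)); // true
    }
}
```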
How to add 2 n-bit numbers.
[diagram: grade-school addition of two n-bit numbers, shown column by column
with carries; each column costs a constant amount of work]
Tom Lehrer: New Math
Time complexity of
grade school addition
[diagram: the completed addition]
T(n) = The amount of time grade school addition uses to add two n-bit numbers
= θ(n) = linear time.
On any reasonable computer adding 3 bits can be done in constant time.
QUESTION: Is there an algorithm to add two n-digit numbers whose time grows sub-linearly in n?
No! You must read the input
to know the answer.
And this takes time n!
How to add 2 n-digit numbers.
Running Time
So any algorithm for addition must use time at least linear in the size of the numbers.
Grade school addition is essentially as good as it can be.
How to multiply 2 n-bit numbers.
X
[diagram: n × n grid of partial products, i.e. n^2 digit multiplications]
Neat! We have demonstrated that as things scale multiplication is a harder problem than addition.
Grade School Addition: Linear time
Grade School Multiplication: Quadratic time
For big inputs,
100000 n << 0.000001 n^2
[graph: time vs input size]
Grade School Addition: Linear time
Grade School Multiplication: Quadratic time
Don’t jump to conclusions!
We compared two algorithms.
We did not prove that
no possible multiplication algorithm
runs in linear time.
Neat! We have demonstrated that as things scale multiplication is a harder problem than addition.
Is there a clever algorithm to multiply two numbers in less than quadratic time?
[diagram: n × n grid of partial products, i.e. n^2 digit multiplications]
Grade School vs Kindergarten
a × b = a + a + a + ... + a   (b copies)
Running Time
Kindergarten: T(n) = time to multiply = θ(b) = linear time?
Grade school: T(n) = time to multiply = θ(n^2) = quadratic time.
Which is faster? Which would you use?
92834765225674897 × 83883977590110394875
Minutes? Centuries? Add a digit (a “9”)? Exponential?
Specifies how the running time
depends on the size of the input.
The Time Complexity of an Algorithm
“size” of input
“time” T(n) executed .
Work for me to
give you the instance.
Work for you to
solve it.
A function mapping
Work for you to
read the instance.
83920
Size of Input Instance
Size of Input Instance
Size of paper:  n = 2 in^2
# of bits:      n = 17 bits
# of digits:    n = 5 digits
Value:          n = 83920
[figure: the number 83920 written on a 2″ × 1″ piece of paper]
Which are reasonable?
Size of Input Instance
Size of paper (n = 2 in^2): intuitive.
# of bits (n = 17): formal.
# of digits (n = 5): also reasonable, since # of bits = 3.32 × # of digits.
Value (n = 83920): unreasonable, since
# of bits = log2(Value), i.e. Value = 2^(# of bits).
Specifies how the running time
depends on the size of the input.
The Time Complexity of an Algorithm
“size” of input
“time” T(n) executed .
Work for you to
read the instance.
Work for you to
solve it.
A function mapping
Work for me to
give you the instance.
Grade School vs Kindergarten
[diagram: n × n grid of partial products, i.e. n^2 digit multiplications]
a × b = a + a + a + ... + a   (b copies)
Grade school: T(n) = θ(n^2) = quadratic time.
Kindergarten: T(n) = θ(b) = linear time?
Which is faster?
92834765225674897 × 8388397759011039475
n = # digits = 20, so grade-school Time ≈ 20^2 ≈ 400.
b = value = 8388397759011039475, so kindergarten Time ≈ 8388397759011039475.
Grade School vs Kindergarten
[diagram: n × n grid of partial products, i.e. n^2 digit multiplications]
a × b = a + a + a + ... + a   (b copies)
Grade school: n = # digits = 20, so Time ≈ 20^2 ≈ 400.
Kindergarten: b = value ≈ 10^n, so Time ≈ 10^n: exponential!!!
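The digit-count arithmetic above, in Java. This is a back-of-the-envelope sketch that counts operations rather than running them; the class and method names are mine.

```java
// Grade-school multiplication costs about n^2 digit operations; repeated
// addition costs about b = 10^n additions, where n = # digits of b.
public class MultCost {

    static double gradeSchoolOps(int nDigits) {
        return (double) nDigits * nDigits;   // ~ n^2
    }

    static double kindergartenOps(int nDigits) {
        return Math.pow(10, nDigits);        // ~ value of b = 10^n
    }

    public static void main(String[] args) {
        int n = 20;                              // a 20-digit multiplier
        System.out.println(gradeSchoolOps(n));   // 400 operations
        System.out.println(kindergartenOps(n));  // 1e20: exponential in n
    }
}
```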
Grade School vs Kindergarten
[diagram: n × n grid of partial products, i.e. n^2 digit multiplications]
a × b = a + a + a + ... + a   (b copies)
92834765225674897 × 8388397759011039475
n = # digits = 20, so grade-school Time ≈ 20^2 ≈ 400.
Adding a single digit (a 21st “9”)
multiplies the kindergarten time by 10!
Grade School Addition: Linear time
Grade School Multiplication: Quadratic time
Kindergarten Multiplication: Exponential time
Oops, this was worse!
Fast Fourier Transform: O(n log n) time
For big inputs, n << n^2 << 2^n
[graph: time vs input size]
Cryptography
n = # of bits vs N = value
Confused? Let’s try again,
by considering online banking.
I set up my banking password as follows.
I randomly choose two n-digit primes.
wolframalpha:
next prime 90843823823904 gives p = 90843823823947
next prime 38475756939837 gives q = 38475756939857
Cryptography
n = # of bits vs N = value
I set up my banking password as follows.
I randomly choose two n-digit primes:
p = 90843823823947 and q = 38475756939857
Cryptography
n = # of bits vs N = value
I multiply: N = pq = 3495284884937375456635355579.
n = 14 digits, so Time = O(n^2): 14^2 = 196 digit operations.
I make this product N public.
The pq factoring is my password.
I know his public product
N = pq = 3495284884937375456635355579.
Cryptography
n = # of bits vs N = value
I simply factor to find p and q.
Then I can steal his money.
loop p = 2..N
    check if p divides N
Time = O(N). No problem, this is linear time.
Linear in the value N, not the number of digits n:
n = 28, so O(N) = O(10^28) = O(10^n) = exponential time.
Cryptography
n = # of bits vs N = value
Linear in the value N,
not the number of digits n.
Work for me to give you the instance: n = 28 digits.
Work for you to solve it: Time = 10^28 >> age of the universe.
Time complexity is a function of the size n, not the value
N = 3495284884937375456635355579.
Is this Silly?
n = # of bits vs N = value
Silly(N)
    loop i = 1..N
        Print(“HI”)
Time = O(N). Size = n = O(log N), so N = 2^O(n) and Time = 2^O(n).
I know other professors would give this answer.
But I insist on a different answer!
Is this Silly?
n = # of bits vs N = value
Silly(N)
    loop i = 1..N
        Print(“HI”)
If N is one 32-bit integer, then N ≤ 2^31,
so Time = O(N) = O(2^31) = O(1).
Is this Silly?
n = # of bits vs N = value
If N is stored in n 32-bit integers:
Silly(x0, x1, x2, …, x(n-1))
    N = ∑i=0..n-1 xi·(2^32)^i
    loop i = 1..N
        Print(“HI”)
Time = O(N), Size = n = O(log N), N = 2^O(n): clearly exponential!
Silly(x0, x1, x2, …, x(n-1))
    loop i = 1..n
        Print(“HI”)
Time = O(n), Size = n: clearly linear!
Specifies how the running time
depends on the size of the input.
The Time Complexity of an Algorithm
“size” of input
“time” T(n) executed .
Work for you to
read the instance.
Work for you to
solve it.
A function mapping
Work for me to
give you the instance.
The Time Complexity of an Algorithm
Size of the input (two numbers N, M; or N values x1, x2, …, xN), and the time for computing x^y:

measure    size of N,M          size of x1,…,xN    verdict              time for x^y
# bits     log2 N + log2 M      N·log2 x           formal, standard     n: O(n^2), (log2 x)^2 bit ops
# digits   log10 N + log10 M    N·log10 x          same, standard       n: O(n^2), (log10 x)^2 char ops
# values   2                    N                  formal, too small    n: O(1), 1 value op
values     N                    (the values)       too big              (same)
The Time Complexity of an Algorithm
Size of the input (N, M; or x1, x2, …, xN), and the time for computing x^y:

measure    size                 verdict    time for x^y
# bits     log2 N + log2 M      formal     n = # bits: O(n^2), (log2 x)^2 bit ops (standard)
# values   N                    formal     n = # values: O(1), 1 value op (standard)
Amortized Time Complexity
Suppose we have a list and a user periodically wants us to add an element to the end.
We store it in an array.
[figure: an array holding 5 1 8 4 with spare room; when it is full,
its contents are copied into a larger array]
When full, we must replace it with a larger one.
How large should the new array be?
incremental strategy: n → n + c
doubling strategy: n → 2n
The amortized time of an add(e) operation
is the average time, i.e. TotalTime(n)/n.
Amortized Time Complexity
Adding Element:
Time to add one element to the end is O(1).
Total Time to add n is O(n).
amortized time = TotalTime(n)/n = O(n)/n = O(1).
Amortized Time Complexity
Allocating memory:
Time to allocate an array of size k is O(k).
incremental strategy: n → n + c
k = c, 2c, 3c, …, n
Total Time = c + 2c + 3c + … + n
           = c (1 + 2 + 3 + … + n/c)
           ≈ c/2 (n/c)^2
           = n^2 / (2c)
amortized time = TotalTime(n)/n = (n^2/(2c))/n = O(n).
Amortized Time Complexity
Allocating memory:
Time to allocate an array of size k is O(k).
incremental strategy: n → n + c gives amortized time O(n).
doubling strategy: n → 2n
k = 1, 2, 4, 8, 16, …, n
Total Time = 1 + 2 + 4 + 8 + 16 + … + n = 2n − 1
amortized time = TotalTime(n)/n = (2n−1)/n = O(1).
Much better!
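A Java simulation of the two regrow strategies, counting only the copying cost. The growth rule is passed in as a function; the names and the starting capacity of 1 are my choices.

```java
// Simulate n add(e) operations, charging k units of copying work each
// time a full array of capacity k is replaced by a bigger one.
public class AmortizedGrow {

    static long totalCopyCost(int n, java.util.function.IntUnaryOperator grow) {
        long cost = 0;
        int size = 0, capacity = 1;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                cost += capacity;                      // copy the old array
                capacity = grow.applyAsInt(capacity);  // allocate the new one
            }
            size++;
        }
        return cost;
    }

    public static void main(String[] args) {
        int n = 1 << 16;
        long incr = totalCopyCost(n, k -> k + 10);  // incremental: ~n^2/(2c)
        long dbl  = totalCopyCost(n, k -> 2 * k);   // doubling: < 2n
        System.out.println(incr + " vs " + dbl);
    }
}
```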
Input: A row of buildings.
Output: For each building, which building blocks its view to the left.
Loop invariant: A prefix of the buildings are done.
The stack contains the sorted list of candidates
for blocking the next buildings.
Pop all buildings that are too short to block the next building;
the first survivor is its blocker.
Stack
Amortized Time Complexity
Input: A row of buildings.
Output: For each building, which building blocks its view to the left.
Stack
For each building, we pop the stack, searching back.
One building may pop O(n) others.
Time = O(n^2)? Oops. No better!
But each building is pushed and popped at most once,
so the total Time = O(n).
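The stack algorithm in Java: a sketch of the standard "previous greater element" pattern. "Blocker" is my name for the output array; heights stand in for the buildings.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Each building is pushed and popped at most once, so the total work is
// O(n) even though a single step may pop many candidates.
public class LeftBlockers {

    // blocker[i] = index of the nearest building to the left of i that is
    // at least as tall as building i, or -1 if its view is clear.
    static int[] blockers(int[] height) {
        int n = height.length;
        int[] blocker = new int[n];
        Deque<Integer> stack = new ArrayDeque<>(); // candidate indices
        for (int i = 0; i < n; i++) {
            while (!stack.isEmpty() && height[stack.peek()] < height[i])
                stack.pop();                       // too short to block i
            blocker[i] = stack.isEmpty() ? -1 : stack.peek();
            stack.push(i);
        }
        return blocker;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
            blockers(new int[]{6, 2, 5, 4, 5, 1, 6})));
    }
}
```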
Which Input of size n?
There are 2^n inputs of size n.
Which do we consider for the time T(n)?
Which Input of size n?
Typical Input: but what is typical?
Average Case: for what distribution?
Worst Case: the time for every input of size n is bounded.
Easiest to compute.
What is the height of the tallest person in the class?
Bigger than this? Need to look at only one person.
Smaller than this? Need to look at every person.
Time Complexity of Algorithm
The time complexity of an algorithm is
the largest time required on any input of size n.
O(n^2): Prove that for every input of size n, the algorithm takes no more than cn^2 time.
Ω(n^2): Find one input of size n for which the algorithm takes at least this much time.
θ(n^2): Do both.
Time Complexity of Problem
The time complexity of a problem is
the time complexity of the fastest algorithm that solves the problem.
O(n^2): Provide an algorithm that solves the problem in no more than this time.
Ω(n^2): Prove that no algorithm can solve it faster.
θ(n^2): Do both.
Classifying Functions
T(n)      n = 10    n = 100   n = 1,000   n = 10,000
log n     3         6         9           13           amoeba
n^(1/2)   3         10        31          100          bird
n         10        100       1,000       10,000       human
n log n   30        600       9,000       130,000      my father
n^2       100       10,000    10^6        10^8         elephant
n^3       1,000     10^6      10^9        10^12        dinosaur
2^n       1,024     10^30     10^300      10^3000      the universe

Note: The universe contains approximately 10^50 particles.
Classifying Functions
Constant          5             θ(1)
Poly Logarithmic  (log n)^5     (log n)^θ(1)
Polynomial        n^5           n^θ(1)
Exponential       2^(5n)        2^θ(n)
Exp               2^(n^5)       2^(n^θ(1))
Double Exp        2^(2^(5n))    2^(2^θ(n))
Each class is << the next.
Classifying Functions
Polynomial = n^θ(1):
Linear θ(n), Quadratic θ(n^2), Cubic θ(n^3), θ(n^4), …
Others, e.g. θ(n^3 log^7(n)):
the log(n) factor is not absorbed,
because it is not a multiplicative constant.
Classifying Functions
Poly. n^θ(1):   8·n^2 = θ(n^2);   7·n^3·log(n) = θ(n^3 log(n))
Exp. 2^θ(n):    8·2^(4n)/n^100 = θ(2^(4n)/n^100);   8·2^(4n) + n^3 = θ(2^(4n))
We consider two (of many) levels in this hierarchy:
Individuals → ignore multiplicative constants (θ)
            → ignore power constants (n^θ(1), 2^θ(n))
BigOh and Theta?
5n^2 + 8n + 2·log n = θ(n^2)
5n^2·log n + 8n + 2·log n = θ(n^2 log n)
Drop low-order terms.
Drop the multiplicative constant.
Which Functions are Polynomials? n^θ(1)
n^c, n^0.0001, n^10000,
5n^2 + 8n + 2·log n, 5n^2·log n, 5n^2.5
Drop low-order terms.
Drop constant, log (and poly) factors.
Ignore the power constant.
Write n^θ(1).
Which Functions are Exponential? 2^θ(n)
2^n, 2^(0.0001·n), 2^(10000·n), 8^n, 2^n/n^100, 2^n·n^100
Drop low-order terms.
Drop constant, log, and poly factors.
Ignore the power constant.
Write 2^θ(n).
Notations
Theta         f(n) = θ(g(n))   f(n) ≈ c·g(n)
BigOh         f(n) = O(g(n))   f(n) ≤ c·g(n)
Omega         f(n) = Ω(g(n))   f(n) ≥ c·g(n)
Little Oh     f(n) = o(g(n))   f(n) << c·g(n)
Little Omega  f(n) = ω(g(n))   f(n) >> c·g(n)
Definition of Theta
f(n) = θ(g(n)):
f(n) is sandwiched between c1·g(n) and c2·g(n),
for some sufficiently small c1 (= 0.0001),
for some sufficiently large c2 (= 1000),
for all sufficiently large n
(for some definition of “sufficiently large”).
Example: 3n^2 + 7n + 8 = θ(n^2).
Order of Quantifiers
No! It cannot be a different c1 and c2 for each n.
Gauss: Arithmetic Sum
∑i=1..n i = 1 + 2 + 3 + … + n = ?
Adding Made Easy
S = 1 + 2 + 3 + … + (n-1) + n      (= number of white dots)
S = n + (n-1) + (n-2) + … + 2 + 1  (= number of yellow dots)
Adding column by column, each of the n columns sums to n+1,
so the white and yellow dots together fill an n × (n+1) grid.
2S = n(n+1) dots in the grid
S = n(n+1)/2
Gauss: Arithmetic Sum
∑i=1..n i = 1 + 2 + 3 + … + n = θ(# of terms · last term)
True whenever the terms increase slowly.
Geometric Sum
∑i=0..n 2^i = 1 + 2 + 4 + 8 + … + 2^n
= 2 · (last term) − 1
Geometric Sum
∑i=0..n r^i = r^0 + r^1 + r^2 + … + r^n = ?
S    = 1 + r + r^2 + r^3 + … + r^n
S·r  =     r + r^2 + r^3 + … + r^n + r^(n+1)
S − S·r = 1 − r^(n+1)
S = (1 − r^(n+1)) / (1 − r)
Geometric Sum
Geometric Sum
∑i=0..n r^i = (r^(n+1) − 1) / (r − 1)
When r > 1?
Geometric Sum
∑i=0..n r^i = (r^(n+1) − 1) / (r − 1) = θ(r^n)
The biggest term dominates when r > 1.
Geometric Increasing
∑i=0..n r^i = r^0 + r^1 + r^2 + … + r^n = θ(biggest term)
True whenever the terms increase quickly.
Geometric Sum
∑i=0..n r^i = (1 − r^(n+1)) / (1 − r)
When r < 1?
Geometric Sum
∑i=0..n r^i = (1 − r^(n+1)) / (1 − r) = θ(1)
The biggest term dominates when r < 1.
Bounded Tail
∑i=0..n r^i = r^0 + r^1 + r^2 + … + r^n = θ(1)
True whenever the terms decrease quickly.
Harmonic Sum
∑i=1..n 1/i = 1/1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/n = ?
Sum of Shrinking Function
f(i) = 1:      ∑i=1..n f(i) = n
f(i) = ?:      ∑i=1..n f(i) = n^(1/2)
f(i) = 1/2^i:  ∑i=1..∞ f(i) = 2
f(i) = 1/i:    ∑i=1..n f(i) = ?
Harmonic Sum
∑i=1..n 1/i = 1/1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/n = θ(log(n))
Approximating Sums by Integrating
The area under the curve approximates the sum:
∑i=1..n f(i) ≈ ∫x=1..n f(x) dx
Adding Made Easy
(For +, −, ×, ÷, exp, log functions f(n))
Adding Made Easy
(For +, −, ×, ÷, exp, log functions f(n))
Geometric Like: If f(n) ≥ 2^Ω(n), then ∑i=1..n f(i) = θ(f(n)).
Arithmetic Like: If f(n) = n^(θ(1)−1), then ∑i=1..n f(i) = θ(n · f(n)).
Harmonic: If f(n) = 1/n, then ∑i=1..n f(i) = log_e n + θ(1).
Bounded Tail: If f(n) ≤ n^(−1−Ω(1)), then ∑i=1..n f(i) = θ(1).
This may seem confusing, but it is really not.
It should help you compute most sums easily.
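The four rules can be sanity-checked numerically; here is a Java sketch. The particular choices of f and the cutoffs are mine, not from the slides.

```java
// Compare sum_{i=1..n} f(i) against the predicted form in each regime.
public class AddingMadeEasy {

    static double sum(java.util.function.DoubleUnaryOperator f, int n) {
        double s = 0;
        for (int i = 1; i <= n; i++) s += f.applyAsDouble(i);
        return s;
    }

    public static void main(String[] args) {
        int n = 20;
        // Geometric-like, f(i) = 2^i: sum = theta(f(n)); here exactly 2*f(n) - 2.
        System.out.println(sum(i -> Math.pow(2, i), n) / Math.pow(2, n));
        // Arithmetic-like, f(i) = i^2: sum = theta(n * f(n)), about n^3/3.
        System.out.println(sum(i -> i * i, n) / (n * (double) n * n));
        // Harmonic, f(i) = 1/i: sum = ln n + theta(1).
        System.out.println(sum(i -> 1.0 / i, n) - Math.log(n));
        // Bounded tail, f(i) = 1/i^2: sum = theta(1), tending to pi^2/6.
        System.out.println(sum(i -> 1.0 / (i * i), 10000));
    }
}
```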
Logs and Exps
Properties of logarithms:
log_b(xy) = log_b x + log_b y
log_b(x/y) = log_b x − log_b y
log_b x^a = a·log_b x
log_b a = log_x a / log_x b
Properties of exponentials:
a^(b+c) = a^b · a^c
a^(bc) = (a^b)^c
a^b / a^c = a^(b−c)
b = a^(log_a b)
b^c = a^(c·log_a b)
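These identities can be checked numerically in Java; logb uses the change-of-base rule, and the sample values are arbitrary.

```java
// Numeric check of the logarithm/exponent identities listed above.
public class LogRules {

    // log_b x via change of base: log_b x = log x / log b.
    static double logb(double b, double x) {
        return Math.log(x) / Math.log(b);
    }

    public static void main(String[] args) {
        double b = 2, x = 8, y = 32, a = 3, c = 4;
        // log_b(xy) = log_b x + log_b y
        System.out.println(logb(b, x * y) - (logb(b, x) + logb(b, y)));
        // log_b x^a = a * log_b x
        System.out.println(logb(b, Math.pow(x, a)) - a * logb(b, x));
        // b^c ... here: a^(c * log_a x) = x^c, with a = b = 2
        System.out.println(Math.pow(b, c * logb(b, x)) - Math.pow(x, c));
    }
}
```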
Easy.
I choose a trillion trillion.
Say, I have a game for you.
We will each choose an integer.
You win if yours is bigger.
I am so nice, I will even let you go first.
Well done. That is big!
Understand Quantifiers!!!
But I choose a trillion trillion and one
so I win.
You laugh but this is
a very important game
in theoretical computer science.
You choose the size of your Java program.
Then I choose the size of the input.
Likely |I| >> |J|
So you better be sure your Java program can handle such long inputs.
Understand Quantifiers!!!
In first-order logic we can state
that I win the game:
∀x, ∃y, y > x
The proof:
Let x be an arbitrary integer.
Let y = x+1.
Note y = x+1 > x.
Understand Quantifiers!!!
Good game.
Let me try again. I will win this time!
Understand Quantifiers!!!
[figure: two bipartite diagrams of voters (Bob, Sam, Fred, John) and
politicians (Trudeau, Scheer); in one, every voter loves the same
politician, in the other each voter loves a possibly different politician]
In both examples, every person loves a politician.
The difference? One politician, vs. possibly a different politician for each voter.
“There is a politician whom everybody loves.”
“For each voter, there is a politician.”
Not clear:
“There is an inverse (e.g. 1/3) for every real.”
“For each person, there is God.”
Understand Quantifiers!!!
∃ politician, ∀ voters, Loves(v, p)
∀ voters, ∃ politician, Loves(v, p)
[figure: the same two diagrams; one politician, vs. possibly a different
politician for each voter]
Understand Quantifiers!!!
∃ politician, [∀ voters, Loves(v, p)]
“There is a politician that is loved by everyone.”
This statement is “about” a politician:
the existence of such a politician.
We claim that this politician is “loved by everyone”.
∀ voters, [∃ politician, Loves(v, p)]
“Every voter loves some politician.”
This statement is “about” voters:
something is true about every voter.
We claim that each “loves some politician.”
A Computational Problem P states
for each possible input I
what the required output P(I) is.
An Algorithm/Program/Machine M is
a set of instructions
(described by a finite string “M”)
on a given input I
follows the instructions and
produces output M(I)
or runs forever.
Eg: Sorting
Eg: Insertion Sort
Understand Quantifiers!!!
Problem P is
computable if
∃M, ∀I, M(I) = P(I)
There exists a single algorithm/machine
that solves P for every input.
Understand Quantifiers!!!
Let’s understand this deeply
Which variable is this statement about?:
Machine M
Input I
Problem P
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
Understand Quantifiers!!!
Let’s understand this deeply
Which variable is this statement about?:
Machine M
Input I
Problem P
Here P is considered a free variable
because it is not bound by an ∃ or a ∀ quantifier.
Hence, the sentence says something about P.
= true
= false
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
Understand Quantifiers!!!
Let’s understand this deeply
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
Understand Quantifiers!!!
∃M is the leftmost quantifier. Hence,
this statement is “about” the collection of machines,
i.e. the existence of at least one with some property.
∃M,
Computes(P,M)
Let’s understand this deeply
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
Understand Quantifiers!!!
Build understanding by building the statement backwards.
What does each subexpression say about its free variables?
(with little emphasis on the bound variables.)
M(I) = P(I): “Machine M computes P on input I.”
∀I, M(I) = P(I): “Machine M computes P.”
∃M, ∀I, M(I) = P(I): “P is computable.”
Problem P is
computable if
∃M, ∀I, M(I) = P(I)
There exists a single algorithm/machine
that solves P for every input.
Understand Quantifiers!!!
Play the following game to prove it!
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
Understand Quantifiers!!!
Two players:
a prover and a disprover.
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
Understand Quantifiers!!!
They read the statement left to right.
The prover produces the object when it is an ∃.
The disprover produces the object when it is a ∀.
The prover can always win
if and only if
the statement is true.
The order in which the players go
REALLY matters.
I have a machine M that I claim works.
I win if M on input I gives
the correct output
Oh yeah, I have an input I for which it does not.
∃M, ∀I, M(I) = P(I)
Problem P is
computable if
What we have been doing all along.
Understand Quantifiers!!!
∀M, ∃I, M(I) ≠ P(I)
∃M, ∀I, M(I) = P(I)
Problem P is
uncomputable if
I win if M on input I gives
the wrong output
I have a machine M that I claim works.
I find one counter example input I for which
his machine M fails us.
Problem P is
computable if
Generally very hard to do.
Understand Quantifiers!!!
∀M, ∃I, M(I) ≠ Halting(I)
∃M, ∀I, M(I) = Sorting(I)
∀I, ∃M, M(I) = Halting(I)
The order in which the players go
REALLY matters.
If you don’t know if it is true or not, trust the game.
true
Problem P is
uncomputable if
Problem P is
computable if
true
Understand Quantifiers!!!
∀M, ∃I, M(I) ≠ Halting(I)
Given I either
Halting(I) = yes or
Halting(I) = no.
I give you an input I.
∀I, Myes(I) says yes
∀I, Mno(I) says no
∀I, ∃M, M(I) = Halting(I)
I don’t know which, but one of these does the trick.
true
true
∃M, ∀I, M(I) = Sorting(I)
Problem P is
computable if
true
Problem P is
uncomputable if
A tricky one.
Understand Quantifiers!!!
Problem P is computable in polynomial time.
Problem P is not computable in polynomial time.
Problem P is computable in exponential time.
The computational class “Exponential Time”
is strictly bigger than the computational
class “Polynomial Time”.
∃M, ∃c, ∀I, M(I) = P(I) & Time(M,I) ≤ |I|^c
∀M, ∀c, ∃I, M(I) ≠ P(I) or Time(M,I) > |I|^c
∃M, ∃c, ∀I, M(I) = P(I) & Time(M,I) ≤ 2^(c|I|)
∃P, [∀M, ∀c, ∃I, M(I) ≠ P(I) or Time(M,I) > |I|^c]
& [∃M, ∃c, ∀I, M(I) = P(I) & Time(M,I) ≤ 2^(c|I|)]
Understand Quantifiers!!!
End
Brute Force (infeasible): Exp = 2ⁿ
Slow Sorting: Quadratic = n²
Looking at Input: Linear = n
Binary Search: log(n)
Touch Input: Constant
input size
time
Growth Rates
For big inputs,
100000·log n << 0.000001·n²
100000 << 0.000001·log n
The Importance of Analyzing Run Time
Thu, 26 Jul 2001 00:50:03 +0300
Subject: New results on WEP (via Matt Blaze)
WEP is the security protocol used in the widely deployed IEEE 802.11 wireless LAN’s. This protocol received a lot of attention this year, and several groups of researchers have described a number of ways to bypass its security.
Attached you will find a new paper which describes a truly practical direct attack on WEP’s cryptography. It is an extremely powerful attack which can be applied even when WEP’s RC4 stream cipher uses a 2048 bit secret key (its maximal size) and 128 bit IV modifiers (as proposed in WEP2). The attacker can be a completely passive eavesdropper (i.e., he does not have to inject packets, monitor responses, or use accomplices) and thus his existence is essentially undetectable. It is a pure known-ciphertext attack (i.e., the attacker need not know or choose their corresponding plaintexts). After scanning several hundred thousand packets, the attacker can completely recover the secret key and thus decrypt all the ciphertexts. The running time of the attack grows linearly instead of exponentially with the key size, and thus it is negligible even for 2048 bit keys.
Adi Shamir
Source: The Risks Digest (catless.ncl.ac.uk/Risks)
Last Updated: 15-01-13
EECS 2011
Prof. J. Elder
– ‹#› –
The Importance of Analyzing Run Time
Sat, 31 May 2003 10:22:56 -0400
Denial of Service via Algorithmic Complexity Attacks
Scott A. Crosby
Dan S. Wallach
Department of Computer Science, Rice University
We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications’ data structures. Frequently used data structures have “average-case” expected running time that’s far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.
Source: The Risks Digest (catless.ncl.ac.uk/Risks)
Running Time
Most algorithms transform input objects into output objects.
The running time of an algorithm typically grows with the input size n.
Average case time is often difficult to determine.
We focus on the worst case running time.
Easier to analyze
Reduces risk
Asymptotic Analysis
In this context ‘asymptotic’ simply means ‘for large input size’.
We don’t worry about small inputs – these will be easy.
Rather we care about how run time will ultimately increase as the input size n gets larger and larger.
This will tend to limit the maximum size of input the algorithm can handle.
Experimental Studies
Write a program implementing the algorithm
Run the program with inputs of varying size and composition
Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time
Plot the results
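A minimal sketch of this experimental recipe, assuming a stand-in `work` method as the algorithm under test (any real algorithm could be substituted):

```java
public class TimingExperiment {
    // Stand-in algorithm under test: a simple quadratic double loop.
    static long work(int n) {
        long s = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s += i + j;
        return s;
    }

    public static void main(String[] args) {
        // Run with inputs of varying size and record elapsed wall-clock time.
        for (int n = 1000; n <= 4000; n += 1000) {
            long start = System.currentTimeMillis();
            work(n);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("n = " + n + "  time(ms) = " + elapsed);
        }
    }
}
```

Plotting the printed (n, time) pairs gives the kind of chart described above.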
Limitations of Experiments
It is necessary to implement the algorithm, which may be difficult
Results may not be indicative of the running time on other inputs not included in the experiment.
In order to compare two algorithms, the same hardware and software environments must be used
Theoretical Analysis
Uses a high-level description of the algorithm instead of an implementation
Characterizes running time as a function of the input size, n.
Takes into account all possible inputs
Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Primitive Operations
Basic computations performed by an algorithm
Identifiable in pseudocode
Largely independent from the programming language
Assumed to take a constant amount of time
Examples:
Evaluating an expression
Assigning a value to a variable
Indexing into an array
Calling a method
Returning from a method
Counting Primitive Operations
By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size
Algorithm arrayMax(A, n)                 # operations
  currentMax = A[0]                      2
  for i = 1 to n − 1 do                  2n
    if A[i] > currentMax then            2(n − 1)
      currentMax = A[i]                  2(n − 1)
  return currentMax                      1
  Total                                  6n − 1
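The arrayMax pseudocode translates directly into Java (a minimal sketch; the per-line operation counts appear as comments):

```java
public class ArrayMax {
    // Java version of the arrayMax pseudocode.
    static int arrayMax(int[] A, int n) {
        int currentMax = A[0];              // 2 operations
        for (int i = 1; i <= n - 1; i++)    // 2n operations
            if (A[i] > currentMax)          // 2(n - 1) operations
                currentMax = A[i];          // up to 2(n - 1) operations
        return currentMax;                  // 1 operation
    }

    public static void main(String[] args) {
        int[] A = {3, 1, 4, 1, 5, 9, 2, 6};
        System.out.println(arrayMax(A, A.length)); // prints 9
    }
}
```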
Estimating Running Time
Algorithm arrayMax executes 6n – 1 primitive operations in the worst case. Define:
a = Time taken by the fastest primitive operation
b = Time taken by the slowest primitive operation
Let T(n) be worst-case time of arrayMax. Then
a (6n – 1) ≤ T(n) ≤ b(6n – 1)
Hence, the running time T(n) is bounded by two linear functions
Growth Rate of Running Time
Changing the hardware/software environment
Affects T(n) by a constant factor, but
Does not qualitatively alter the growth rate of T(n)
The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax
Overview
Motivation
Definition of Running Time
Classifying Running Time
Asymptotic Notation & Proving Bounds
Algorithm Complexity vs Problem Complexity
Constant Factors
On a logarithmic scale, the asymptotic growth rate is not affected by
constant factors or
lower-order terms
Examples
10²n + 10⁵ is a linear function
10⁵n² + 10⁸n is a quadratic function
Polynomial Growth
Many algorithms that we encounter will have polynomial growth
In a log-log chart, the asymptotic slope of the line corresponds to the order of the polynomial.
Slope = 1
Slope = 2
Slope = 3
Overview
Motivation
Definition of Running Time
Classifying Running Time
Asymptotic Notation & Proving Bounds
Algorithm Complexity vs Problem Complexity
Asymptotic Notation
The notation was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie (“analytic number theory”).
The notation was popularized in the work of number theorist Edmund Landau; hence it is sometimes called a Landau symbol.
It was popularized in computer science by Donald Knuth, who (re)introduced the related Omega and Theta notations.
Knuth also noted that the (then obscure) Omega notation had been introduced by Hardy and Littlewood under a slightly different meaning, and proposed the current definition.
Source: Wikipedia
Asymptotic Notation
Our primary use of this notation is to state and prove upper and lower asymptotic bounds on run time T(n).
However the notation applies to the growth of arbitrary functions f(n).
Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants
c and n0 such that
f(n) ≤ cg(n) for n ≥ n0
Example: 2n + 10 is O(n)
2n + 10 ≤ cn
(c – 2) n ≥ 10
n ≥ 10/(c – 2)
Pick c = 3 and n0 = 10
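The choice c = 3 and n0 = 10 can be spot-checked over a finite range (a sanity check, not a proof):

```java
public class BigOhWitness {
    // Check the witnesses c = 3, n0 = 10 for "2n + 10 is O(n)":
    // does 2n + 10 <= 3n hold?
    static boolean holds(int n) {
        return 2 * n + 10 <= 3 * n;
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 100000; n++)
            if (!holds(n)) throw new AssertionError("fails at n = " + n);
        System.out.println("2n + 10 <= 3n for all tested n >= 10");
    }
}
```

Note the inequality first holds exactly at n = 10, matching the derivation n ≥ 10/(c − 2).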
Definition of “Big Oh”
Big-Oh Example
Example: the function n² is not O(n)
n² ≤ cn
n ≤ c
The above inequality cannot be satisfied since c must be a constant
More Big-Oh Examples
7n-2
7n-2 is O(n)
need c > 0 and n0 ≥ 1 such that 7n-2 ≤ c•n for n ≥ n0
this is true for c = 7 and n0 = 1
3n³ + 20n² + 5
3n³ + 20n² + 5 is O(n³)
need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c•n³ for n ≥ n0
this is true for c = 5 and n0 = 20
3 log n + 5
3 log n + 5 is O(log n)
need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c•log n for n ≥ n0
this is true for c = 4 and n0 = 32
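The claimed (c, n0) witnesses above can likewise be spot-checked over a finite range (again a sanity check, not a proof):

```java
public class BigOhExamples {
    // 7n - 2 <= 7n, claimed for all n >= 1
    static boolean linearHolds(long n) {
        return 7 * n - 2 <= 7 * n;
    }
    // 3n^3 + 20n^2 + 5 <= 5n^3, claimed for all n >= 20
    static boolean cubicHolds(long n) {
        return 3 * n * n * n + 20 * n * n + 5 <= 5 * n * n * n;
    }
    // 3 log2(n) + 5 <= 4 log2(n), claimed for all n >= 32
    static boolean logHolds(long n) {
        double lg = Math.log(n) / Math.log(2);
        return 3 * lg + 5 <= 4 * lg;
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 100000; n++) {
            if (!linearHolds(n)) throw new AssertionError("7n-2 at n=" + n);
            if (n >= 20 && !cubicHolds(n)) throw new AssertionError("cubic at n=" + n);
            if (n >= 32 && !logHolds(n)) throw new AssertionError("log at n=" + n);
        }
        System.out.println("all claimed (c, n0) pairs hold on the tested range");
    }
}
```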
Big-Oh and Growth Rate
The big-Oh notation gives an upper bound on the growth rate of a function
The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no more than the growth rate of g(n)
We can use the big-Oh notation to rank functions according to their growth rate
f(n) is O(g(n)) g(n) is O(f(n))
g(n) grows more Yes No
f(n) grows more No Yes
Same growth Yes Yes
Asymptotic Algorithm Analysis
The asymptotic analysis of an algorithm involves finding the running time in big-Oh notation
To perform the asymptotic analysis
We find the worst-case number of primitive operations executed as a function of the input size
We express this function with big-Oh notation
Example:
We determine that algorithm arrayMax executes at most 6n – 1 primitive operations
We say that algorithm arrayMax “runs in O(n) time”
Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations
[Chart and data table: measured running time (ms) vs. input size, three experimental runs]
f(n) = Θ(g(n)) iff ∃c₁, c₂ > 0, ∃n₀, ∀n ≥ n₀: c₁·g(n) ≤ f(n) ≤ c₂·g(n)
[Chart: best-case, average-case, and worst-case running time vs. input size]
(Ο,Ω,Θ and all of that)
[Chart: n, 2n + 10, and 3n vs. n, illustrating that 2n + 10 is O(n)]
f(n) ∈ O(g(n)) iff ∃c > 0, ∃n₀, ∀n ≥ n₀: f(n) ≤ c·g(n)