
COMP20007 Design of Algorithms
Growth Rate and Algorithm Efficiency
Lars Kulik Lecture 3 Semester 1, 2021

Assessing Algorithm “Efficiency”
Resources consumed: time and space.
We want to assess efficiency as a function of input size:
• Mathematical vs empirical assessment
• Average case vs worst case
Knowledge about input peculiarities may affect the choice of algorithm.
The right choice of algorithm may also depend on the programming language used for implementation.

Running Time Dependencies
There are many things that a program’s running time depends on:
1. The complexity of the algorithms used
2. Input to the program
3. Underlying machine, including memory architecture
4. Language/compiler/operating system
Since we want to compare algorithms, we ignore (3) and (4); just consider units of time.
Use a natural number n as a measure of (2), the size of the input, and express (1) as a function of n.

The RAM Word Model
Assumptions
• Data is represented in words of fixed length in bits (e.g., 64-bit)
• Fundamental operations on words take one unit of time
Basic arithmetic:    +, −, ×, /
Memory access:       load, store
Comparisons:         <, >, =, ≠, ≥, ≤
Logical operators:   &&, ||
Bitwise operators:   &, |
The goal of analysis is to count the operations based on parameters such as input size or problem size.
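To see why the fixed word length matters, here is a small Python sketch (my illustration, not from the slides): Python integers have arbitrary precision, so addition is only a unit-cost operation while operands fit in a machine word.

import time

def time_additions(x, y, reps=200_000):
    """Time `reps` additions of x + y and return the elapsed seconds."""
    start = time.perf_counter()
    for _ in range(reps):
        _ = x + y
    return time.perf_counter() - start

small = 2**40       # fits comfortably in a 64-bit word
big = 2**40_000     # spans thousands of words internally
print("word-sized operands:", time_additions(small, small))
print("multi-word operands:", time_additions(big, big))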

Estimating Time Consumption
If c is the cost of a basic operation and g(n) is the number of times the operation is performed for input of size n,
then running time t(n) ≈ c · g(n).
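A quick Python sketch of this estimate (mine, not from the slides): take summing an array as the algorithm, so the basic operation is one addition and g(n) = n. Estimating c from one run then predicts the running time of another.

import time

def summation_time(n):
    """Run the algorithm (summing n numbers) and return elapsed seconds."""
    data = list(range(n))
    start = time.perf_counter()
    total = 0
    for x in data:
        total += x      # the basic operation, performed g(n) = n times
    return time.perf_counter() - start

n1, n2 = 1_000_000, 4_000_000
c = summation_time(n1) / n1           # estimated cost per basic operation
print("predicted t(n2):", c * n2)     # t(n) ≈ c · g(n), with g(n) = n
print("measured  t(n2):", summation_time(n2))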

Examples: Input Size and Basic Operation
Problem                          Size measure                      Basic operation
Search in list of n items        n                                 Key comparison
Multiply two matrices of floats  Matrix size (rows times columns)  Float multiplication
Graph problem                    Number of nodes and edges         Visiting a node
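Illustrating the table's second row, a sketch of mine (not from the slides): schoolbook multiplication of an n×m by an m×p float matrix performs exactly n·m·p float multiplications, the basic operation being counted.

def mat_mul_with_count(A, B):
    """Multiply matrices A (n x m) and B (m x p), counting float multiplications."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]   # one float multiplication
                mults += 1
    return C, mults                            # mults == n * m * p

_, count = mat_mul_with_count([[1.0, 2.0]], [[3.0], [4.0]])
print(count)    # 1 * 2 * 1 = 2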

Best, Average, or Worst Case?
The running time t(n) may well depend on more than just n. Worst-case analysis makes the most adverse assumptions about input.
Are they the worst n things your algorithm could see?
Best-case analysis makes optimistic assumptions. Are they the best n things the algorithm could see?
Average-case analysis aims to find the expected running time across all possible input of size n. (Note: This is not an average of the worst and best cases but assumes that your input is drawn randomly from all possible inputs of size n.)
Amortised analysis takes the context of running an algorithm into account and calculates cost spread over many runs.
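As a standard illustration of amortised analysis (my example; the lecture may use another): appending n items to a dynamic array that doubles its capacity when full costs fewer than 2n element copies in total, so each append is O(1) amortised even though an individual append can cost O(n).

def total_copy_cost(n):
    """Append n items to a doubling array, counting element copies."""
    capacity, size, copies = 1, 0, 0
    buf = [None]
    for x in range(n):
        if size == capacity:                 # full: double and copy everything
            capacity *= 2
            buf = buf + [None] * (capacity - size)
            copies += size
        buf[size] = x
        size += 1
    return copies

for n in (10, 1_000, 100_000):
    print(n, total_copy_cost(n) / n)         # ratio stays below 2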

Average-case Analysis: Sequential Search
function SequentialSearch(A[0..n − 1], K)
    i ← 0
    while i < n and A[i] ≠ K do
        i ← i + 1
    if i < n then return i
    else return −1
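A direct Python rendering of the pseudocode (a sketch of mine, not part of the slides):

def sequential_search(A, K):
    """Return the index of K in A, or -1 if K does not occur."""
    i = 0
    while i < len(A) and A[i] != K:    # one key comparison per iteration
        i += 1
    return i if i < len(A) else -1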
If the probability of a successful search is p (0 ≤ p ≤ 1), then the average number of key comparisons is

Cavg(n) = p × (n + 1)/2 + n × (1 − p).

Large Input Is What Matters

Small input does not provide a stress test for an algorithm.

As an alternative to Euclid's algorithm (Lecture 1) we can find the greatest common divisor of m and n by testing each k no greater than the smaller of m and n, to see if it divides both.

For small input (m, n), both these versions of gcd are fast. Only as we let m and n grow large do we witness (big) differences in performance.

The Tyranny of Growth Rate

n       log2 n   n       n log2 n   n^2     n^3     2^n      n!
10^1    3        10^1    3·10^1     10^2    10^3    10^3     4·10^6
10^2    7        10^2    7·10^2     10^4    10^6    10^30    9·10^157
10^3    10       10^3    1·10^4     10^6    10^9    −        −

10^30 is one thousand times the number of nanoseconds since the Big Bang. At a rate of a trillion (10^12) operations per second, executing 2^100 operations would take a computer in the order of 10^10 years. That is more than the estimated age of the Earth.

The Tyranny of Growth Rate

[Figure: growth of 2^n, (1/2)n^3, 5n^2 and 60n for n from 0 to 25; 2^n rapidly outgrows the polynomials.]

Functions Often Met in Algorithm Classification

1: Running time independent of input.
log n: Typical for "divide and conquer" solutions, for example, lookup in a balanced search tree.
n: Linear. When each input element must be processed once.
n log n: Each input element processed once and processing involves other elements too, for example, sorting.
n^2, n^3: Quadratic, cubic. Processing all pairs (triples) of elements.
2^n: Exponential. Processing all subsets of elements.

Asymptotic Analysis

We are interested in the growth rate of functions:

• Ignore constant factors
• Ignore small input sizes

Asymptotics

f(n) ≺ g(n) iff lim n→∞ f(n)/g(n) = 0

That is: g approaches infinity faster than f. For example,

1 ≺ log n ≺ n^ε ≺ n^c ≺ n^(log n) ≺ c^n ≺ n^n, where 0 < ε < 1 < c.

In asymptotic analysis, think big! For example, log n ≺ n^0.0001, even though for n = 10^100 we have log10 n = 100 > n^0.0001 ≈ 1.023.
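How slowly does log n ≺ n^0.0001 assert itself? A Python sketch (mine, not from the slides) evaluates the ratio in log space, writing n = 10^x so that n itself never has to be materialised:

import math

# Compare f(n) = log10(n) = x against g(n) = n^0.0001 = 10^(0.0001 * x).
# The ratio f/g equals 10^(log10(x) - 0.0001 * x).
for x in (2, 100, 10_000, 100_000, 1_000_000):
    log_ratio = math.log10(x) - 0.0001 * x
    print(f"n = 10^{x}: log10(n) / n^0.0001 = 10^{log_ratio:.2f}")

# The ratio peaks around x ≈ 4,300 and only drops below 1 once x
# (that is, log10 n) reaches roughly 47,000, i.e. n ≈ 10^47000.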

Big-Oh Notation
O(g(n)) denotes the set of functions that grow no faster than g, asymptotically.
We write t(n) ∈ O(g(n)) when, for some c and n0,

n > n0 ⇒ t(n) < c · g(n)

For example, 100n + 5 ∈ O(n^2).

Ω is used for lower bounds: t(n) ∈ Ω(g(n)) when n > n0 ⇒ t(n) > c · g(n), for some n0 and c. Θ is for exact order of growth:

t(n) ∈ Θ(g(n)) iff t(n) ∈ O(g(n)) and t(n) ∈ Ω(g(n)).
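A worked example using these definitions (a standard one, not necessarily the lecture's): let t(n) = n(n − 1)/2. For all n > 0, n(n − 1)/2 < n^2/2, so t(n) ∈ O(n^2) with c = 1/2 and n0 = 0. For all n > 2 we have n − 1 > n/2, hence n(n − 1)/2 > n^2/4, so t(n) ∈ Ω(n^2) with c = 1/4 and n0 = 2. Together, n(n − 1)/2 ∈ Θ(n^2).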

Big-Omega: What t(n) ∈ Ω(g(n)) Means
[Figure: t(n) eventually stays above c · g(n) for all n > n0; the region n ≤ n0 is marked "Don't care".]

Big-Theta: What t(n) ∈ Θ(g(n)) Means
[Figure: t(n) eventually lies between c1 · g(n) and c2 · g(n) for all n > n0; the region n ≤ n0 is marked "Don't care".]

Establishing Growth Rate
We can use the definition of O directly.
n > n0 ⇒ t(n) < c · g(n)
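For instance (a standard worked example; the original slide may have used another): to establish 100n + 5 ∈ O(n^2), observe that for n > 5 we have 5 < n, so 100n + 5 < 100n + n = 101n ≤ 101n^2. The definition is met with c = 101 and n0 = 5.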