CPSC 320: Asymptotic Runtime Analysis Solutions
Questions that you’ll face throughout this course are: How can I figure out the runtime of an algorithm? Is one runtime faster than another? In this worksheet you’ll review some basics of algorithm runtime analysis, and touch also on algorithm correctness. We’ll usually focus on worst-case runtime, as a function of the input size, say n. Asymptotic analysis helps us understand the runtime for large n.
1 Comparing Orders of Growth for Functions
Imagine that each of the functions below represents the runtime of some algorithm on instances of size n. Give a good Θ bound for each, and then arrange these functions by increasing order of growth (fastest algorithm to slowest). Notation that we use here and throughout the course: lg n = log₂ n, ln n = logₑ n, and log n = log₁₀ n.

SOLUTION: In increasing order of growth:

Original function                    Good Θ bound              Notes
ln n                                 Θ(lg n)                   (log n, lg n, and ln n differ by a constant factor; but that does not make log n a constant factor!)
55n + 4                              Θ(n)
1.5 n lg n                           Θ(n lg n)
2n log(n²) = 4n log n                Θ(n lg n)                 (same “rank” as previous line)
n + n²                               Θ(n²)
(n lg n)(n + 1) = n² lg n + n lg n   Θ(n² lg n)
n^√n                                 Θ(n^√n)                   (challenge problem; what’s the “best” form?)
2^n                                  Θ(2^n)
1.6^(2n) = (1.6²)^n = 2.56^n         Θ(2.56^n)
n!                                   Θ(n!)
(n + 1)!                             Θ((n + 1)!) = Θ(n · n!)

For the challenge problem comparison of n^√n and 2^n, one approach is to make them look more similar. Using the fact that f(x) = 2^(lg f(x)), we have

    n^√n = (2^(lg n))^√n = 2^(√n lg n).

Now, if we take the ratio of these two functions:

    2^n / 2^(√n lg n) = 2^(n − √n lg n).

Also, n − √n lg n ∈ Θ(n). So, 2^(√n lg n) goes to infinity but is dominated by 2^n.
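If you want a numeric feel for this domination, here is a small Python check (ours, not the worksheet’s) comparing the two exponents, n versus √n lg n, directly:

import math

# Compare the exponents of 2^n and n^sqrt(n) = 2^(sqrt(n) * lg n).
for n in [16, 256, 4096, 65536]:
    small = math.sqrt(n) * math.log2(n)  # exponent of n^sqrt(n)
    print(f"n = {n}: sqrt(n)*lg(n) = {small:.0f}, gap n - sqrt(n)*lg(n) = {n - small:.0f}")

The gap grows roughly like n itself, which is the Θ(n) claim above in action.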
2 Asymptotic Runtime Orders of Growth for Code
Give and briefly justify good Θ bounds on the worst-case running time of each of these pseudocode snippets, sometimes dealing with an array A[1..n] of length n.
SOLUTION: We’ve annotated the pseudocode below with notes and put our conclusions on worst-case running time below each snippet. Note: any line of code that is executed takes Ω(1) time (i.e., at least some constant amount of time to process); so, anything annotated as O(1) is also Θ(1).
2.1 Finding the maximum in a list
Let max = -infinity           O(1)
For each element a in A:      n iterations
    If max < a:               O(1)
        Set max to a          O(1); no asymptotic difference to the loop body’s runtime anyway
Return max                    O(1)
Here we have n iterations of a loop whose body takes constant time to run (regardless of the conditional contained inside), plus some constant-time setup and teardown. Overall, this is Θ(n).
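For concreteness, a minimal Python rendering of the same loop (a sketch; the name find_max is ours, not the worksheet’s):

def find_max(A):
    maximum = float("-inf")  # O(1) setup
    for a in A:              # n iterations
        if maximum < a:      # O(1) test on every iteration
            maximum = a      # O(1); runs only sometimes, no asymptotic difference
    return maximum           # O(1) teardown; Theta(n) overall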
2.2 "Median-of-three" computation
Let first = A[1]                            O(1)
Let last = A[n]                             O(1)
Let middle = A[floor(n/2)]                  O(1)
If first <= middle And middle <= last:      O(1)
    return middle                           O(1)
Else If middle <= first And first <= last:  O(1)
    return first                            O(1)
Else:                                       O(1)
    return last                             O(1)
Nothing exciting happening here. Every single operation is O(1); so, it doesn't matter what happens in the conditional, the whole thing takes Θ(1) time.
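For reference, a direct 0-indexed Python transcription (a sketch that keeps the pseudocode’s logic exactly as written, since only its running time is at issue here):

def median_of_three(A):
    first = A[0]                 # O(1)
    last = A[-1]                 # O(1)
    middle = A[len(A) // 2 - 1]  # the pseudocode's A[floor(n/2)], shifted to 0-indexing
    if first <= middle <= last:      # O(1)
        return middle
    elif middle <= first <= last:    # O(1)
        return first
    else:
        return last                  # O(1); constant work on every path, so Theta(1)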
2.3 Counting inversions
Let inversions = 0                       O(1)
For each index i from 1 to n:            n iterations
    For each index j from (i+1) to n:    n-i iterations
        If A[i] > A[j]:                  O(1)
            Increment inversions         O(1)
Return inversions                        O(1)
The inner loop takes constant time per iteration. The outer loop executes n times, and on the ith iteration the inner loop executes n − i times. This is a common pattern, so you may be able to tell quickly that the overall running time is Θ(n²). But let’s work it out using sums. The runtime is:

    Σ_{i=1..n} Σ_{j=i+1..n} 1 = Σ_{i=1..n} (n − i)
                              = (Σ_{i=1..n} n) − (Σ_{i=1..n} i)
                              = n² − n(n+1)/2
                              = n² − (n² + n)/2
                              = n²/2 − n/2
                              ∈ Θ(n²)
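As a quick sanity check, here is a Python version of the snippet (a sketch; the name count_inversions is ours). In 0-indexed Python the inner loop runs n − i − 1 times, which changes nothing asymptotically:

def count_inversions(A):
    n = len(A)
    inversions = 0                 # O(1)
    for i in range(n):             # n iterations
        for j in range(i + 1, n):  # n - i - 1 iterations
            if A[i] > A[j]:        # O(1) comparison
                inversions += 1
    return inversions

# Reverse-sorted input is the worst case: n(n-1)/2 inversions.
assert count_inversions([4, 3, 2, 1]) == 6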
It’s interesting to note also that in the worst case, where the array is in reverse-sorted order, we increment inversions Θ(n²) times as well, which means there can be Θ(n²) inversions in an array of length n.

2.4 Repeated division
Let count = 0            O(1)
While n > 0:             some number of iterations
    count = count + 1    O(1)
    n = floor(n/2)       O(1)
Return count             O(1)
This loop isn’t just a counting loop. It divides n by two each time. How many times can we divide n by 2 before it reaches 0? Well... an infinite number. Fortunately, we’re actually taking the floor of n/2, and so we will reach 0.
If you don’t already see where this is going, you might make a table of values of n and number of iterations to guide you toward it:
Initial value of n    Number of loop iterations
0                     0
1                     1
2                     2
3                     2
4                     3
5                     3
6                     3
7                     3
8                     4
At some point, you’ll notice that powers of 2 matter and realize that the right side is roughly the log of the left. Specifically, the number of iterations is ⌊lg n⌋ + 1 (but just 0 when n = 0).
So, we have a logarithmic number of iterations; each iteration takes constant time; plus constant setup and finish time, for: Θ(lg n) runtime.
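A quick Python experiment (a sketch; count_halvings is our name, not the worksheet’s) confirms the ⌊lg n⌋ + 1 formula:

import math

def count_halvings(n):
    count = 0
    while n > 0:    # about lg n iterations
        count += 1
        n = n // 2  # floor division, so n really does reach 0
    return count

# Check the closed form floor(lg n) + 1 for small positive n.
for n in range(1, 1000):
    assert count_halvings(n) == math.floor(math.log2(n)) + 1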
2.5 A Mystery algorithm
Let i = 0          O(1)
Let s = 0          O(1)
While (s <= n):    some number of iterations
    i = i + 1      O(1)
    s = s + i      O(1)
Return i           O(1)
To understand what is happening, you might make a table of values of i (the number of completed iterations) and the corresponding values of s.
Value of i    Value of s
1             1
2             1+2
3             1+2+3
4             1+2+3+4
5             1+2+3+4+5
We know that Σ_{j=0..i} j = i(i+1)/2. So if the While loop executes i iterations before stopping, then this means that

    (i−1)i/2 ≤ n < i(i+1)/2.

Solving for i using the quadratic formula, we find that

    i ≈ (−1 + √(1 + 8n))/2 ≈ √(2n).

And so i ∈ Θ(√n).
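To see the √(2n) estimate in action, here is a small Python check (a sketch; mystery is our name for the pseudocode):

import math

def mystery(n):
    i = 0
    s = 0
    while s <= n:  # stops once 1 + 2 + ... + i exceeds n
        i += 1
        s += i
    return i

# The returned i tracks sqrt(2n), consistent with the Theta(sqrt(n)) bound.
for n in [100, 10_000, 1_000_000]:
    print(n, mystery(n), round(math.sqrt(2 * n)))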