
The Australian National University Research School of Computer Science
COMP3600/6466 – Algorithms
Semester 2, 2020 – Tutorial 1
This tutorial is compiled by:
Cormac Kikkert, William Cashman, Timothy Horscroft, and Hanna Kurniawati
Exercise 1 Math Refresher
1. Calculate lim_{n→∞} 6^n / 2^n.
2. Calculate lim_{n→∞} a^n / b^n (where a > b).
3. Calculate lim_{n→∞} (n^2 + e^n) / (2n + e^{4n}).
4. Please compute the sum of the first n consecutive odd numbers.
5. The cost of building the first level of an apartment building is 5U, and each subsequent level is more expensive by 2U. How much is the total cost of building an apartment with l levels? Please specify your answer in terms of l and U.
Solution.
1. Using index laws:

       lim_{n→∞} 6^n / 2^n = lim_{n→∞} (6/2)^n = lim_{n→∞} 3^n = ∞

   This means 2^n = o(6^n), or equivalently, 6^n = ω(2^n).

2. Similar to the previous part:

       lim_{n→∞} a^n / b^n = lim_{n→∞} (a/b)^n = ∞

   This is because a > b, so a/b > 1; if the base is greater than 1, then the exponential must grow to infinity. This means b^n = o(a^n), or equivalently, a^n = ω(b^n).

3. The key is to use L'Hôpital's rule twice; differentiating the top and the bottom (=_H marks an application of the rule):

       lim_{n→∞} (n^2 + e^n) / (2n + e^{4n})
         =_H lim_{n→∞} (2n + e^n) / (2 + 4e^{4n})
         =_H lim_{n→∞} (2 + e^n) / (16e^{4n})
         = lim_{n→∞} ( 2 / (16e^{4n}) + e^n / (16e^{4n}) )
         = lim_{n→∞} 2 / (16e^{4n}) + lim_{n→∞} 1 / (16e^{3n})
         = 0 + 0
         = 0
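As a quick numerical sanity check (not a substitute for the limit arguments above), the short C program below evaluates the three ratios for a few values of n; the pair a = 3, b = 2 is just an arbitrary choice with a > b.

#include <math.h>
#include <stdio.h>

/* Numerical sanity check for parts 1-3: the first two ratios blow up,
 * while (n^2 + e^n) / (2n + e^{4n}) shrinks towards 0. */
int main(void) {
    double a = 3.0, b = 2.0;                 /* arbitrary pair with a > b (part 2) */
    for (int n = 1; n <= 40; n += 13) {
        double r1 = pow(6.0, n) / pow(2.0, n);
        double r2 = pow(a, n) / pow(b, n);
        double r3 = (1.0 * n * n + exp(n)) / (2.0 * n + exp(4.0 * n));
        printf("n=%2d  6^n/2^n=%.3e  a^n/b^n=%.3e  (n^2+e^n)/(2n+e^{4n})=%.3e\n",
               n, r1, r2, r3);
    }
    return 0;
}

Compiled with cc check.c -lm, the third column falls towards 0 while the first two grow without bound, matching the limits derived above.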

(Alternative explanation)
Here is a good way to estimate the solution, but you should use a more formal proof method (e.g. L'Hôpital's rule) in your actual assignments.
Notice that e^n grows much faster than n^2, and e^{4n} grows much faster than 2n, so

    n^2 + e^n = Θ(e^n), and 2n + e^{4n} = Θ(e^{4n})

Therefore

    lim_{n→∞} (n^2 + e^n) / (2n + e^{4n}) = lim_{n→∞} e^n / e^{4n} = lim_{n→∞} (e / e^4)^n = 0,

since e / e^4 < 1.

4. The odd numbers are all numbers of the form 2k − 1. To sum the first n odd numbers we need to evaluate the sum

    1 + 3 + 5 + · · · + (2n − 1) = Σ_{k=1}^{n} (2k − 1)

Before we evaluate this sum, recall the formula for the sum of the first n natural numbers (it is good to know, as we will be using it a lot in this course):

    1 + 2 + · · · + n = n(n + 1) / 2

Using this formula, we can evaluate the sum as

    Σ_{k=1}^{n} (2k − 1) = Σ_{k=1}^{n} 2k − Σ_{k=1}^{n} 1
                         = 2 Σ_{k=1}^{n} k − n
                         = 2 · n(n + 1)/2 − n
                         = n(n + 1) − n
                         = n^2

5. The cost of the first floor is 5U, and each subsequent floor costs 2U more than the one before. In general, the kth floor costs (2k + 3)U, so the total cost for an apartment with l levels is:

    5U + 7U + · · · + (2l + 3)U

or, equivalently,

    Σ_{k=1}^{l} (2k + 3)U

We can factor out U, then simplify:

    Σ_{k=1}^{l} (2k + 3)U = U Σ_{k=1}^{l} (2k + 3)
                          = U ( Σ_{k=1}^{l} 2k + Σ_{k=1}^{l} 3 )
                          = U ( 2 Σ_{k=1}^{l} k + 3l )
                          = U ( 2 · l(l + 1)/2 + 3l )
                          = U ( l(l + 1) + 3l )
                          = U l(l + 1 + 3)
                          = U l(l + 4)

Exercise 2 Time Complexity

Please derive the time complexity of the following algorithms, assuming they are executed in a RAM computational model.

Algorithm 1 BubbleSort(An array of integers A)
1: for i = 1 to A.length do
2:     for j = A.length downto i + 1 do
3:         if A[j] < A[j − 1] then
4:             swap A[j] and A[j − 1]

For the binary search (searching a sorted array A for a value v, with indices s and e bounding the current range and m = ⌊(s + e)/2⌋): after any iteration of the while loop, either we have returned, or s > e and the loop terminates (note that e − s < 0 means s > e), or s = e and we will be on our last iteration of the loop (as any case will result in a return or s > e). Thus we can calculate an upper bound on the number of times lines 4-10 are run by considering how long it takes to reach e − s ≤ 0. We do this by showing e − s at least halves every iteration, by considering the 3 cases that occur during a loop:
• A[m] == v: then we break out of the loop, and so we can ignore this case until e − s ≤ 1 (because we are looking for how many times the loop runs in the worst case).
• A[m] > v: then e = m − 1, and a computation symmetric to the case below shows that the difference e − s has at least halved after the loop.
• A[m] < v: then s = m + 1, and so the difference e − s after the loop is:

      e − (m + 1) = e − ⌊(s + e)/2⌋ − 1 ≤ e − (s + e)/2 = (e − s)/2

  and so the difference has at least halved.
Since e − s at least halves after each iteration, and the loop terminates the iteration after e − s ≤ 0 is reached, the body and header of the while loop will execute O(log_2 n + 1) = O(log_2 n) times in the worst case. By bounding the cost of lines 4-10 above by a constant c1, and the cost of lines 1-2 and line 11 by constants c2 and c3 respectively, we get a running time of O(c1 log n + c2 + c3) = O(log n).
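For concreteness, here is a C sketch of a binary search consistent with the analysis above; the 0-based indexing, the function name and the return value of −1 for a missing element are illustrative assumptions, not the exact pseudocode that the line numbers above refer to.

#include <stdio.h>

/* Binary search sketch: s and e bound the current search range,
 * m = floor((s + e) / 2), and every iteration either returns or
 * at least halves e - s, giving the O(log n) bound derived above.
 * Returns the index of v in the sorted array A, or -1 if v is absent
 * (the -1 convention is an assumption made for this sketch). */
int binary_search(const int A[], int n, int v) {
    int s = 0, e = n - 1;
    while (s <= e) {
        int m = s + (e - s) / 2;   /* equals floor((s + e) / 2), avoids overflow */
        if (A[m] == v)
            return m;              /* case A[m] == v: stop immediately */
        else if (A[m] < v)
            s = m + 1;             /* case A[m] < v: discard the left half */
        else
            e = m - 1;             /* case A[m] > v: discard the right half */
    }
    return -1;                     /* e - s dropped below 0: v is not in A */
}

int main(void) {
    int A[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binary_search(A, 6, 7));   /* prints 3 */
    printf("%d\n", binary_search(A, 6, 4));   /* prints -1 */
    return 0;
}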

Exercise 3 Estimation
Estimate an upper bound on the input size for which a modern computer can run an algorithm within 1 second. Assume modern computers execute one hundred million operations per second (you may assume a constant factor of 1 for each of the complexities, e.g. O(n log n) means exactly n log n operations are required).
time complexity    input size
O(1)
O(log n)
O(√n)
O(n)               n ≤ 10^8
O(n log n)
O(n^2)
O(n^3)
O(2^n)
O(n!)
Solution. As this is an approximation exercise there is no "right" answer. However, we get the following table by assuming the constant factor in each of the algorithms below is 1 and erring on the side of caution:
time complexity    input size
O(1)               trick question
O(log n)           n ≤ 10^{30,000,000}
O(√n)              n ≤ 10^16
O(n)               n ≤ 10^8
O(n log n)         n ≤ 10^6
O(n^2)             n ≤ 5000
O(n^3)             n ≤ 300
O(2^n)             n ≤ 25
O(n!)              n ≤ 10
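A rough way to cross-check these thresholds, under the same assumptions (a constant factor of exactly 1 and a budget of 10^8 operations), is the short C program below, which computes the largest n each cost function allows within the budget; the table above rounds these values down further to err on the side of caution.

#include <math.h>
#include <stdio.h>

/* Largest n for which cost(n) <= 1e8 operations, for each complexity class.
 * Assumes a constant factor of exactly 1, as stated in the exercise. */
int main(void) {
    const double budget = 1e8;

    printf("O(log n):   n <= 2^%.0f (about 10^30,000,000)\n", budget);
    printf("O(sqrt n):  n <= %.0f\n", budget * budget);        /* 10^16 */
    printf("O(n):       n <= %.0f\n", budget);                 /* 10^8  */

    double n = 2.0;                  /* O(n log n): step up until the budget is exceeded */
    while (n * log2(n) <= budget) n += 1.0;
    printf("O(n log n): n <= %.0f\n", n - 1.0);

    printf("O(n^2):     n <= %.0f\n", floor(sqrt(budget)));    /* 10^4 */
    printf("O(n^3):     n <= %.0f\n", floor(cbrt(budget)));    /* 464  */
    printf("O(2^n):     n <= %.0f\n", floor(log2(budget)));    /* 26   */

    double fact = 1.0;               /* O(n!): multiply up factorials until over budget */
    int k = 1;
    while (fact * (k + 1) <= budget) { k++; fact *= k; }
    printf("O(n!):      n <= %d\n", k);                        /* 11   */
    return 0;
}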
Exercise 4 Properties of big O
1. Suppose f1(n) = O(g1(n)) and f2(n) = O(g2(n)). Is it true that f1(n) + f2(n) = O(g1(n) + g2(n))?
Please provide a proof or a counterexample.
2. Suppose f1(n) = O(g1(n)) and f2(n) = O(g2(n)). Is it true that f1(n) · f2(n) = O(g1(n) · g2(n))? Please provide a proof or a counterexample.
Solution.
1. Yes. If f1(n) = O(g1(n)) and f2(n) = O(g2(n)), then by definition there exist constants k1, k2, n1, n2 > 0 with

       0 ≤ f1(n) < k1 g1(n), and 0 ≤ f2(n) < k2 g2(n)    ∀ n ≥ max(n1, n2)

   Hence

       f1(n) + f2(n) ≤ k1 g1(n) + k2 g2(n) ≤ max(k1, k2) (g1(n) + g2(n))    when n ≥ max(n1, n2).

   Thus f1(n) + f2(n) = O(g1(n) + g2(n)).

2. Yes. If f1(n) = O(g1(n)) and f2(n) = O(g2(n)), then by definition there exist constants k1, k2, n1, n2 > 0 with
       0 ≤ f1(n) < k1 g1(n), and 0 ≤ f2(n) < k2 g2(n)    ∀ n ≥ max(n1, n2)

   Hence

       f1(n) f2(n) < k1 g1(n) · k2 g2(n) = k1 k2 · g1(n) g2(n)    when n ≥ max(n1, n2).

   Thus f1(n) · f2(n) = O(g1(n) · g2(n)).