
Com S 311 Section B Introduction to the Design and Analysis of Algorithms
Lecture One for Week 3
Xiaoqiu (See-ow-chew) Huang
Iowa State University
February 9, 2021

Proving that a function is not in O(n)
Let T(n) be defined by the following recurrence for some constant d > 0:
T(1) = d,
T(n) = T(n − 1) + (dn + lg n) if n > 1.
Disprove that T(n) is in O(n) (i.e., that T(n) = O(n)).
We use proof by contradiction. Assume that T(n) is in O(n). Then there are positive constants c and n0 such that
T(n) ≤ cn for all n ≥ n0.
Note that lg n ≥ 0 for n ≥ 1. By dropping all the lg terms and expanding the recurrence, we have
T(n) ≥ d(1 + 2 + 3 + … + n) = dn(n + 1)/2 ≥ dn^2/2.

Proving that a function is not in O(n)
Combining the two inequalities, we have dn^2/2 ≤ cn for all n ≥ n0.
This is the same as
n ≤ 2c/d for all n ≥ n0.
This does not hold when n = 1 + max(n0, ⌈2c/d⌉). So the assumption that T(n) is in O(n) is wrong.
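As a sanity check, the recurrence can be evaluated numerically; the ratio T(n)/n keeps growing, which is consistent with T(n) not being O(n). A minimal sketch, taking d = 1 (the choice of d is an assumption for illustration):

```python
import math

def T(n, d=1.0):
    """Evaluate T(1) = d, T(n) = T(n-1) + d*n + lg n iteratively."""
    t = d
    for k in range(2, n + 1):
        t += d * k + math.log2(k)
    return t

# T(n)/n grows without bound, so no constant c can satisfy T(n) <= c*n.
# The quadratic lower bound T(n) >= d*n*(n+1)/2 from the proof also holds.
for n in (10, 100, 1000):
    print(n, T(n) / n)
```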

Subtleties
We may need to select a tight asymptotic bound by subtracting a lower-order term, in order to complete the induction step of a proof.
Consider the recurrence for some constant d > 0:
T(1) = d,
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + d if n > 1.
We try to show by mathematical induction that
T(n) ≤ cn for some constant c > 0. (1)
Assume that the inequality holds for all m < n. Thus it holds at ⌈n/2⌉ and ⌊n/2⌋:
T(⌈n/2⌉) ≤ c⌈n/2⌉, and T(⌊n/2⌋) ≤ c⌊n/2⌋.

Subtleties
Using these inequalities, we have
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + d ≤ c⌈n/2⌉ + c⌊n/2⌋ + d = cn + d.
We cannot obtain T(n) ≤ cn from this. We fix the problem by using a tighter upper bound on T(n) as follows:
T(n) ≤ cn − e for some constant c > 0 and constant e ≥ 0.
Now we have
T(n) ≤ (c⌈n/2⌉ − e) + (c⌊n/2⌋ − e) + d
= (cn − e) + (d − e) ≤ cn − e if d − e ≤ 0, i.e., e ≥ d.

Subtleties
Consider the base step.
We show that (1) holds at n = 1. The left side:
T(1) = d. The right side:
c × 1 − e = c − e ≥ d = T(1) if c ≥ e + d.
Set e to d and c to 2d. Then we have, for some constant d > 0,
T(n) ≤ 2dn − d ≤ 2dn for all n ≥ 1.
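The finished bound can be checked numerically. A small sketch, assuming d = 3 as an arbitrary positive constant; in fact this recurrence is solved exactly by T(n) = 2dn − d:

```python
import math
from functools import lru_cache

D = 3.0  # an arbitrary choice for the positive constant d

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return D
    return T(math.ceil(n / 2)) + T(n // 2) + D

# Verify the derived bound T(n) <= 2*D*n - D for a range of n.
for n in range(1, 500):
    assert T(n) <= 2 * D * n - D + 1e-9
print("bound holds for n = 1..499")
```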

Avoiding pitfalls
Consider T(n) defined by the recurrence T(1) = 1,
T (n) = 2T (⌊n/2⌋) + n if n > 1.
What is wrong with the following proof that T(n) is in O(n)? We want to show that
T(n) ≤ cn for some constant c > 0. Assume that
T(⌊n/2⌋) ≤ c⌊n/2⌋.
Then we conclude that
T(n) = 2T(⌊n/2⌋) + n ≤ 2(c⌊n/2⌋) + n ≤ 2c(n/2) + n = cn + n = O(n).
The error is that the chain establishes only T(n) ≤ cn + n = (c + 1)n; since cn + n ≤ cn is false, we cannot conclude the exact inductive claim T(n) ≤ cn.
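Numerically, the flaw shows up immediately: for T(n) = 2T(⌊n/2⌋) + n with T(1) = 1, the ratio T(n)/n grows like lg n, so T(n) is not O(n). A small check:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k, T(n) = n*(k + 1), so T(n)/n = lg n + 1 grows without bound.
ratios = [T(2 ** k) / 2 ** k for k in range(0, 15)]
print(ratios[:5])  # 1.0, 2.0, 3.0, 4.0, 5.0
assert all(b > a for a, b in zip(ratios, ratios[1:]))
```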

Changing variables
Consider T(n) defined by the recurrence
T(n) = 2T(2^⌊(lg n)/2⌋) + lg n.
We simplify the recurrence by changing the variable: with n = 2^m,
T(2^m) = 2T(2^⌊m/2⌋) + m.
By defining S(m) = T(2^m), we obtain the new recurrence
S(m) = 2S(⌊m/2⌋) + m.
It can be shown by mathematical induction that S(m) is in O(m lg m).
Changing back from S(m) to T(n), we obtain
T(n) = T(2^m) = S(m) = O(m lg m) = O(lg n lg lg n).
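The substitution can be mirrored in code: evaluate S(m) = 2S(⌊m/2⌋) + m directly and check that S(m) = O(m lg m). A sketch assuming the base case S(1) = 1 (the recurrence above leaves the base unspecified) and the constant c = 2:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def S(m):
    """S(m) = T(2^m) with S(1) = 1 assumed; S(m) = 2*S(m // 2) + m."""
    if m == 1:
        return 1
    return 2 * S(m // 2) + m

# Check S(m) <= 2*m*lg m, i.e., S(m) = O(m lg m), over a range of m.
for m in range(2, 5000):
    assert S(m) <= 2 * m * math.log2(m)
print("S(m) <= 2*m*lg m for m = 2..4999")
```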

Solving recurrences with the recursion-tree method
We draw a recursion tree to find a good guess for the solution to the recurrence.
Each node in a recursion tree is associated with the cost of a function invocation, with the cost of the non-recursive part shown at the node and with its children associated with the costs of the recursive function invocations.
We add the costs within each level of the tree to obtain a total of per-level costs.
Then we add all the per-level costs to obtain the total cost of all levels of the recursion.
Consider the recurrence for some constant d > 0:
T(n) = dn^2 for n < 4,
T(n) = 3T(⌊n/4⌋) + dn^2 if n ≥ 4.
Assume that n is a power of 4.
The root of the recursion tree has cost dn^2 and three children, one per recursive call on a subproblem of size n/4. Level 1 has 3 nodes of cost d(n/4)^2 each, level 2 has 3^2 nodes of cost d(n/16)^2 each, and so on down to the lowest level, which has 3^(log_4 n) leaves of cost T(1) = d each.
The per-level totals are dn^2, 3d(n/4)^2, 3^2 d(n/16)^2, ..., and 3^(log_4 n) d at the lowest level.

Solving recurrences with the recursion-tree method
We add up the costs over all levels to obtain the total cost for the tree:
T(n) = dn^2 + 3d(n/4)^2 + 3^2 d(n/4^2)^2 + 3^3 d(n/4^3)^2 + ... + 3^(log_4 n − 1) d(n/4^(log_4 n − 1))^2 + 3^(log_4 n) T(n/4^(log_4 n))
= dn^2 + (3/16)dn^2 + (3/16)^2 dn^2 + (3/16)^3 dn^2 + ... + (3/16)^(log_4 n − 1) dn^2 + n^(log_4 3) T(1)
= Σ_{i=0}^{log_4 n − 1} (3/16)^i dn^2 + n^(log_4 3) d
≤ Σ_{i=0}^{∞} (3/16)^i dn^2 + n^(log_4 3) d
= (1/(1 − 3/16)) dn^2 + d n^(log_4 3)
= (16/13) dn^2 + d n^(log_4 3)
≤ (16/13) dn^2 + dn^2
= (29/13) dn^2 = cn^2 for a constant c = (29/13)d > 0.
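The bound T(n) ≤ (29/13)dn^2 can be double-checked by evaluating the recurrence directly. A sketch with d = 1 (an assumed value); the floor version of the recurrence is used, which only lowers the cost:

```python
from functools import lru_cache

D = 1.0  # the constant d from the recurrence

@lru_cache(maxsize=None)
def T(n):
    if n < 4:
        return D * n * n
    return 3 * T(n // 4) + D * n * n

# The recursion-tree bound: T(n) <= (29/13)*d*n^2 for all n >= 1.
for n in range(1, 3000):
    assert T(n) <= (29 / 13) * D * n * n + 1e-9
print("T(n) <= (29/13)*d*n^2 for n = 1..2999")
```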

Proving by mathematical induction
We prove by mathematical induction that
T(n) ≤ cn^2 for some constant c > 0.
Base: For 1 ≤ n < 4, we have T(n) = dn^2 ≤ cn^2 if c ≥ d.
Induction: Assume that for every m < n, we have T(m) ≤ cm^2. Setting m = ⌊n/4⌋, we obtain T(⌊n/4⌋) ≤ c(⌊n/4⌋)^2. Then we have
T(n) = 3T(⌊n/4⌋) + dn^2 ≤ 3c(⌊n/4⌋)^2 + dn^2 ≤ 3c(n/4)^2 + dn^2 = (3/16)cn^2 + dn^2 ≤ cn^2
if (3/16)c + d ≤ c, i.e., c ≥ (16/13)d > 0.

Another example
Consider the recurrence for some constant d > 0:
T(n) = d(n − 1) if n < 3,
T(n) = T(⌊n/3⌋) + T(⌊2n/3⌋) + dn if n ≥ 3.
Assume that n is a power of 3.
Note that
T(n/3) = T(n/3^2) + T(2n/3^2) + d(n/3), and
T(2n/3) = T(2n/3^2) + T(4n/3^2) + d(2n/3).
The sum of all costs at this level is d(n/3) + d(2n/3) = dn.

Another example
At the next level, we have
T(n/3^2) = T(n/3^3) + T(2n/3^3) + d(n/3^2),
T(2n/3^2) = T(2n/3^3) + T(4n/3^3) + d(2n/3^2),
T(4n/3^2) = T(4n/3^3) + T(8n/3^3) + d(4n/3^2).
The sum of all costs at this level is d(n/3^2) + 2d(2n/3^2) + d(4n/3^2) = dn.
The longest simple path from the root to a leaf is n → (2/3)n → (2/3)^2 n → ... → 1, where (2/3)^k n = 1, or (3/2)^k = n. Thus the height of the tree is k = log_{3/2} n.
The sum of costs over all levels is the number of levels times the cost per level, or O(dn log_{3/2} n) = O(n lg n).

Proving by mathematical induction
We prove by mathematical induction that
T(n) ≤ cn lg n for some constant c > 0.
Base: For 1 ≤ n < 3, we have T(n) = d(n − 1) ≤ cn lg n if c ≥ d.
Induction: Assume that for every 1 ≤ m < n, we have T(m) ≤ cm lg m. Setting m = ⌊n/3⌋ and m = ⌊2n/3⌋, respectively, we obtain
T(⌊n/3⌋) ≤ c(⌊n/3⌋) lg(⌊n/3⌋), and
T(⌊2n/3⌋) ≤ c(⌊2n/3⌋) lg(⌊2n/3⌋).

Proving by mathematical induction
Then we have
T(n) = T(⌊n/3⌋) + T(⌊2n/3⌋) + dn
≤ c(⌊n/3⌋) lg(⌊n/3⌋) + c(⌊2n/3⌋) lg(⌊2n/3⌋) + dn
≤ c(n/3) lg(n/3) + c(2n/3) lg(n/(3/2)) + dn
= [c(n/3) lg n − c(n/3) lg 3] + [c(2n/3) lg n − c(2n/3) lg(3/2)] + dn
= cn lg n − c[(n/3) lg 3 + (2n/3) lg(3/2)] + dn
= cn lg n − c[(n/3) lg 3 + (2n/3) lg 3 − (2n/3) lg 2] + dn
= cn lg n − cn[lg 3 − 2/3] + dn
≤ cn lg n if −cn[lg 3 − 2/3] + dn ≤ 0, i.e., c ≥ d/(lg 3 − 2/3).

The master method for solving recurrences
The master method is used to determine the running time T(n) of an algorithm that divides a problem of size n into a subproblems, each of size ⌈n/b⌉ or ⌊n/b⌋, where a ≥ 1 and b > 1 are constants.
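Before turning to the master theorem, the O(n lg n) bound just proved for the uneven-split recurrence can be checked numerically. A sketch with d = 1 and c = 2 (any constant at least d/(lg 3 − 2/3) ≈ 1.09d works for large n; both values are assumptions for illustration):

```python
import math
from functools import lru_cache

D = 1.0      # the constant d
C = 2 * D    # comfortably above D / (lg 3 - 2/3)

@lru_cache(maxsize=None)
def T(n):
    if n < 3:
        return D * (n - 1)
    return T(n // 3) + T(2 * n // 3) + D * n

# Verify T(n) <= C*n*lg n over a range of n.
for n in range(2, 20000):
    assert T(n) <= C * n * math.log2(n) + 1e-9
print("T(n) <= 2*n*lg n for n = 2..19999")
```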
The a subproblems are solved recursively, each in T(⌈n/b⌉) or T (⌊n/b⌋) time. Let the asymptotically positive function f (n) denote the cost of dividing the problem and of combining the results of the subproblems.
Then T(n) is defined by the following recurrence:
T(n) = aT(⌈n/b⌉) + f(n), or
T(n) = aT(⌊n/b⌋) + f(n).
The master method is based on the following theorem.

Master Theorem
Theorem 4.1 (Master theorem)
Let f(n) and T(n) with constants a ≥ 1 and b > 1 be defined as in the above recurrence.
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(⌈n/b⌉) ≤ cf(n) (or af(⌊n/b⌋) ≤ cf(n)) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Understanding the theorem
In other words, the solution to the recurrence is determined by the larger of f(n) and n^(log_b a). In case 1, the function n^(log_b a) is the larger, and the solution is T(n) = Θ(n^(log_b a)). In case 3, the function f(n) is the larger, and the solution is T(n) = Θ(f(n)). In case 2, the two functions are equal to within a constant factor, and the solution is T(n) = Θ(f(n) lg n), where a logarithmic factor is included in the solution.

Understanding the theorem
In case 1, f(n) must be asymptotically smaller than n^(log_b a) by a factor of n^ε for some constant ε > 0.
In case 3, f(n) must be asymptotically larger than n^(log_b a) by a factor of n^ε for some constant ε > 0, and in addition satisfy the 'regularity' condition that af(n/b) ≤ cf(n).
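For polynomial driving functions f(n) = n^k, the three cases reduce to comparing k with log_b a, and the case-3 regularity condition holds automatically (af(n/b) = (a/b^k)f(n), and a/b^k < 1 exactly when k > log_b a). A small hypothetical helper sketching this dispatch:

```python
import math

def master_polynomial(a, b, k, eps=1e-9):
    """Classify T(n) = a*T(n/b) + n^k by the master theorem.
    Only handles f(n) = n^k; returns a human-readable Theta bound."""
    crit = math.log(a, b)  # the critical exponent log_b a
    if k < crit - eps:
        return f"Theta(n^{crit:g})"        # case 1: n^(log_b a) dominates
    if abs(k - crit) <= eps:
        return f"Theta(n^{crit:g} lg n)"   # case 2: equal up to constants
    return f"Theta(n^{k:g})"               # case 3: f(n) dominates

print(master_polynomial(4, 2, 1.5))  # Theta(n^2)
print(master_polynomial(4, 2, 2))    # Theta(n^2 lg n)
print(master_polynomial(4, 2, 2.5))  # Theta(n^2.5)
```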

Using the master method
Consider the recurrence
T(n) = 4T(⌊n/2⌋) + f(n), where
(a) f(n) = n^(3/2),
(b) f(n) = n^2, or
(c) f(n) = n^(5/2).
In this recurrence, a = 4 ≥ 1 and b = 2 > 1, and f(n) in each case is asymptotically positive, so the assumptions of the master theorem are met.
We compare f(n) with n^(log_b a) = n^(log_2 4) = n^2.
In case (a), f(n) = n^(3/2) = O(n^(2 − ε)) for ε = 1/2. Case 1 of the master theorem applies to the function n^(3/2), so we have the solution T(n) = Θ(n^2).

Using the master method
In case (b), f(n) = n^2 = Θ(n^(log_b a)). Case 2 of the master theorem applies to the function n^2, so we have the solution
T(n) = Θ(n^2 lg n).
In case (c), f(n) = n^(5/2) = Ω(n^(2 + ε)) for ε = 1/2. For the regularity condition,
af(⌊n/b⌋) = 4(⌊n/2⌋)^(5/2) ≤ 4(n/2)^(5/2) = (4/2^(5/2)) n^(5/2) = cn^(5/2) for c = 4/2^(5/2) = 1/√2 < 1.
Case 3 of the master theorem applies to the function n^(5/2), so we have the solution T(n) = Θ(n^(5/2)).
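The three answers can be sanity-checked by evaluating T(n) = 4T(⌊n/2⌋) + f(n) directly (a sketch assuming the base case T(1) = f(1), which the recurrence leaves unspecified) and watching T(n)/g(n) settle into a constant band for each predicted growth g:

```python
import math
from functools import lru_cache

def make_T(f):
    """Build T(n) = 4*T(n // 2) + f(n) with T(1) = f(1) assumed."""
    @lru_cache(maxsize=None)
    def T(n):
        if n == 1:
            return f(1)
        return 4 * T(n // 2) + f(n)
    return T

cases = [  # (f, predicted Theta-growth g)
    (lambda n: n ** 1.5, lambda n: n ** 2),                     # case 1
    (lambda n: n ** 2, lambda n: n ** 2 * (math.log2(n) + 1)),  # case 2
    (lambda n: n ** 2.5, lambda n: n ** 2.5),                   # case 3
]
for f, g in cases:
    T = make_T(f)
    ratios = [T(2 ** k) / g(2 ** k) for k in range(4, 16)]
    print(round(min(ratios), 3), round(max(ratios), 3))  # narrow bands
```

For powers of two, case (b) gives T(2^k) = (k + 1)·4^k exactly, so its ratio is identically 1.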