COMP3121/9101 (Tutorial 2) School of Computer Science and Engineering, UNSW Sydney
To help you decide which problems to try: problems marked with [K] are key questions that test you on the core concepts, so please do them first. Problems marked with [H] are harder problems that we recommend you try after you complete all other questions (or perhaps you prefer a challenge). Good luck!
1 Recurrences
2 Divide and Conquer
3 Karatsuba’s trick
4 Selection
§1 Recurrences
Exercise 1.1. [K] Determine the asymptotic growth rate of the solutions to the following recurrences. You may use the Master Theorem if it is applicable.
(a) T(n) = 2T(n/2) + n(2 + sin n).
(b) T(n) = 2T(n/2) + √n + log n.
(c) T(n) = 8T(n/2) + n^(log_2 n).
(d) T(n) = T(n − 1) + n.
Solution. (a) Here, we realise that a = 2, b = 2, and f(n) = n(2 + sin n). Since −1 ≤ sin n ≤ 1 for all n, we have n ≤ f(n) ≤ 3n, and so f(n) = Θ(n). The critical exponent is
c* = log_b a = log_2 2 = 1.
Thus, the second case of the Master Theorem applies and we get
T(n) = Θ(n log n).
(b) Again, we repeat the same process. We realise that a = 2, b = 2, and f(n) = √n + log n, so the critical exponent is c* = 1. Since log n eventually grows slower than √n, we have that f(n) = Θ(√n) = Θ(n^(1/2)). This implies that f(n) = O(n^(c* − ε)) with ε = 1/2, so the first case of the Master Theorem applies and we obtain T(n) = Θ(n^(c*)) = Θ(n).
(c) Here, a = 8, b = 2, and f(n) = n^(log_2 n). So the critical exponent is
c* = log_b a = log_2 8 = 3.
On the other hand, for large enough n we have log_2 n ≥ 4, so f(n) = n^(log_2 n) = Ω(n^4). Consequently,
f(n) = Ω(n^(c* + 1)).
To be able to use the third case of the Master Theorem, we also have to show that for some 0 < c < 1, the following holds for sufficiently large n:
a f(n/b) ≤ c f(n).
In our case, this translates to
8 (n/2)^(log_2(n/2)) = 8 · n^(log_2 n − 1) / 2^(log_2 n − 1) = 16 n^(log_2 n) / n^2 < (1/2) n^(log_2 n),
so the required inequality is satisfied with c = 1/2 for all n > 16 (indeed, for all n ≥ 6). Therefore, by case 3 of the Master Theorem, the solution is
T(n) = Θ(f(n)) = Θ(n^(log_2 n)).
(d) Note that the Master Theorem does not apply; however, we can alter the proof of the Master Theorem to obtain the solution to the recurrence. For every k, we have T(k) = T(k − 1) + k. So, unrolling the recurrence, we obtain
T(n) = T(n − 1) + n
     = [T(n − 2) + (n − 1)] + n
     = T(n − 2) + (n − 1) + n
     = [T(n − 3) + (n − 2)] + (n − 1) + n
     = ...
     = T(1) + (n − (n − 2)) + (n − (n − 3)) + ... + (n − 1) + n
     = T(1) + (2 + 3 + ... + n)
     = T(1) + n(n + 1)/2 − 1,
so T(n) = Θ(n^2). ■
Exercise 1.2. [K] Explain why we cannot apply the Master Theorem to the following recurrences.
(a) T(n) = 2^n T(n/2) + n^n.
(b) T(n) = T(n/2) − n^2 log n.
(c) T(n) = (1/3) T(n/3) + n.
(d) T(n) = 3T(3n) + n.
Solution. (a) The Master Theorem requires the value of a to be constant. Here, the value a = 2^n depends on the input size, so it is not constant.
(b) The Master Theorem requires f(n) to be non-negative. Here f(n) = −n^2 log n, and we can see that f(n) ≤ 0 for all n ∈ N, so the Master Theorem cannot be applied.
(c) The Master Theorem requires a ≥ 1. Here, a = 1/3 and so, the conditions of the Master Theorem are not met.
(d) The Master Theorem requires b > 1. However, we see that we can write T(n) as
T(n) = 3T(n / (1/3)) + n;
in this case, we see that b = 1/3 < 1. So the conditions of the Master Theorem are not met.
Exercise 1.3. [H] Consider the following naive Fibonacci algorithm.
Algorithm 1.1 F(n): The naive Fibonacci algorithm
Require: n ≥ 1
  if n = 1 or n = 2 then
    return 1
  return F(n − 1) + F(n − 2)
When analysing its time complexity, this yields the recurrence T(n) = T(n − 1) + T(n − 2). Show that this yields a running time of Θ(φ^n), where φ = (1 + √5)/2 is the golden ratio. How do you propose that we improve upon this running time?
Hint — This is a standard recurrence relation. Guess T (n) = an and solve for a.
Solution. We solve T(n) = T(n − 1) + T(n − 2). Using the standard trick for solving such recurrence relations, we guess T(n) = a^n for some a. This produces the recurrence
a^n = a^(n−1) + a^(n−2).
Dividing both sides by a^(n−2), we obtain the quadratic equation
a^2 − a − 1 = 0,
whose roots are a = (1 ± √5)/2. In other words, we obtain
T(n) = A · ((1 + √5)/2)^n + B · ((1 − √5)/2)^n = Θ(φ^n),
where φ = (1 + √5)/2.
The inefficiency comes from the fact that we are making unnecessary and repeated calls to problems that we have already seen before. For example, to compute F (n), we compute F (n − 1) and F (n − 2). To compute F (n − 1), we compute F (n − 2) again. Because these values never change, a way to improve the overall running time is to cache the results that we have computed previously and only make O(1) calls for results that we have already computed. This reduces our complexity dramatically from exponential to O(n). This is an example of dynamic programming, something we will see later down the track. ■
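The caching idea can be illustrated in a few lines of Python (a sketch; the function name is ours, not part of the tutorial):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n: int) -> int:
    """Naive recurrence plus a cache: each F(k) is computed only once,
    so the exponential blow-up disappears and the total work is O(n)."""
    if n == 1 or n == 2:
        return 1
    return F(n - 1) + F(n - 2)

print(F(10))  # 55
print(F(50))  # 12586269025
```

Without the `@lru_cache` line, `F(50)` would take on the order of φ^50 recursive calls; with it, each value is computed once and thereafter retrieved in O(1).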
§2 Divide and Conquer
Exercise 2.1. [K] You are given a 2n × 2n board with one of its cells missing (i.e., the board has a hole). The position of the missing cell can be arbitrary. You are also given a supply of “trominoes”, each of which can cover three cells as below.
Design an algorithm that covers the entire board (except for the hole) with these “trominoes”. Note that the trominoes can be rotated, and your solution should not try to brute force all possible placements of the “trominoes”.
Solution. We proceed by a divide and conquer recursion; thus, we split the board into 4 equal parts:
We can now apply our recursive procedure to the top left quadrant, which has a missing square. To be able to apply recursion to the remaining 3 quadrants, we place a single tromino at the centre, covering one cell of each of them, as shown below.
We treat the covered square in each of the three pieces as a missing square, and can proceed by recursion applied to all 4 sub-boards, whose sides are half the size of the original board. ■
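The recursion can be sketched as follows (a Python sketch under our own representation, not from the tutorial: the board is a matrix of tromino labels, with −1 marking the hole):

```python
def tile(board, size, top, left, hole_r, hole_c, counter):
    """Cover the size x size sub-board at (top, left) -- size a power of two --
    whose single uncovered cell is (hole_r, hole_c), using L-trominoes."""
    if size == 1:
        return
    counter[0] += 1
    label = counter[0]                      # label of the tromino placed at the centre
    half = size // 2
    # Top-left corners of the four quadrants, and each quadrant's centre-most cell.
    quads = [(top, left), (top, left + half),
             (top + half, left), (top + half, left + half)]
    centres = [(top + half - 1, left + half - 1), (top + half - 1, left + half),
               (top + half, left + half - 1), (top + half, left + half)]
    new_holes = []
    for (r0, c0), (cr, cc) in zip(quads, centres):
        if r0 <= hole_r < r0 + half and c0 <= hole_c < c0 + half:
            new_holes.append((hole_r, hole_c))  # this quadrant already has the hole
        else:
            board[cr][cc] = label               # tromino covers its centre-most cell
            new_holes.append((cr, cc))
    for (r0, c0), (hr, hc) in zip(quads, new_holes):
        tile(board, half, r0, c0, hr, hc, counter)

size = 8                                        # a 2^3 x 2^3 board
hole = (5, 2)
board = [[0] * size for _ in range(size)]
board[hole[0]][hole[1]] = -1
tile(board, size, 0, 0, hole[0], hole[1], [0])
print(all(v != 0 for row in board for v in row))   # True: every cell is covered
```

Each recursive call places exactly one tromino at the centre, so a 2^n × 2^n board uses (4^n − 1)/3 trominoes in total.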
Exercise 2.2. [K] Given positive integers M and n, compute M n using only O(log n) many multiplications.
Solution. Note that
M^n = (M^(n/2))^2            if n is even,
M^n = (M^((n−1)/2))^2 × M    if n is odd.
Hence, we can proceed by divide and conquer. If n is even, we recursively compute M^(n/2) and then square it. If n is odd, we recursively compute M^((n−1)/2), square it, and then multiply by another factor of M. Since n is (approximately) halved in each recursive call, there are at most O(log n) recursive calls, and since we perform only one or two multiplications in each recursive call, the algorithm performs O(log n) many multiplications, as required.
Alternative Solution: Any positive integer n can be uniquely written in base 2, and therefore can be uniquely written as the sum of distinct powers of 2. Thus, M^n can be written as the product of powers of M where the exponent is itself a power of 2, i.e. M, M^2, M^4, M^8, .... We can obtain these powers of M with O(log n) multiplications by repeated squaring, and then multiply together the appropriate powers to get M^n. The appropriate powers to multiply are the powers M^(2^i) such that the ith least significant bit of the binary representation of n is 1. For example, to obtain M^11, note that the binary representation of 11 is (1011)_2, and hence we should multiply together M, M^2, and M^8. ■
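Either approach fits in a few lines; here is a Python sketch of the recursive divide-and-conquer version (the names are our own), with a counter confirming that only O(log n) multiplications are performed:

```python
def power(M, n, mults):
    """Compute M^n, counting multiplications in mults[0]."""
    if n == 1:
        return M
    half = power(M, n // 2, mults)   # exponent (approximately) halves each call
    mults[0] += 1
    result = half * half             # one squaring
    if n % 2 == 1:
        mults[0] += 1
        result *= M                  # one extra multiplication when n is odd
    return result

counter = [0]
print(power(3, 13, counter))   # 1594323 (= 3^13)
print(counter[0])              # 5 multiplications, versus 12 for the naive method
```

The counter grows by at most 2 per level of recursion and there are at most ⌈log_2 n⌉ levels, matching the O(log n) bound above.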
Exercise 2.3. [H] Suppose you are given an array A containing 2n numbers. The only operation that you can perform is a query asking whether element A[i] is equal to element A[j], for 1 ≤ i, j ≤ 2n. Your task is to determine whether there is a number which appears in A at least n times, using an algorithm which runs in linear time.
Solution. Create two arrays B and C, initially both empty. Repeat the following procedure n times:
• Pick any two elements of A and compare them.
– If the numbers are different, discard both of them.
– If instead the two numbers are equal, append one of them to B and the other to C.
Proof. Suppose such a value x exists (see the Claim below). In a pair of distinct numbers, at most one of them can be x, so after discarding both, x still makes up at least half the remaining array elements. Repeating this, x must account for at least half the values appearing in B and C together. But B and C are identical, so at least half the entries of B are equal to x. ■
We can now apply the same algorithm to B, and continue reducing the instance size until we reach an array of size two. The elements of this array are the only candidates for the original property, i.e., the only values that could have appeared n times in the original array.
There is a special case; B and C could be empty after the first pass. In this case, the only candidates for the property are the values which appear in the last pair to be considered. For each of these, perform a linear scan to find their frequency in the original array, and report the answer accordingly.
Finally we analyse the time complexity. Clearly, each step of the procedure repeated n times above takes constant time, so we can reduce the problem from array A (of length 2n) to array B (of length ≤ n) in Θ(n) time. Letting the instance size be half the length of the array (to more neatly fit the Master Theorem), we can express the worst case using the recurrence
T(n) = T(n/2) + Θ(n).
Hint — A rather tricky one! The reasoning resembles the reasoning used in the celebrity problem (Tutorial Question 1.5): compare elements in pairs to first find one or at most two possible candidates, and then count how many times they appear.
Claim — If a value x appears in at least half the entries of A, then the same value also appears in at least half the entries of B.
Now a f(n/b) = f(n/2), which is certainly less than, say, 0.9 f(n) for large n, as f is linear. By case 3 of the Master Theorem, the time complexity is T(n) = Θ(f(n)) = Θ(n), as required. ■
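The whole procedure can be sketched in Python (our own function names; for simplicity this version tolerates odd-length arrays in the recursion by carrying the unpaired element along as an extra candidate, and verifies every candidate with a linear scan):

```python
def candidates(A):
    """Return a small set of values that could appear in at least half of A."""
    if len(A) <= 2:
        return set(A)
    extra = set()
    if len(A) % 2 == 1:
        extra.add(A[-1])                 # unpaired element: keep it as a candidate
    B = []
    for i in range(0, len(A) - 1, 2):
        if A[i] == A[i + 1]:
            B.append(A[i])               # equal pair: keep one copy
    if not B:
        # Everything was discarded; only the last pair inspected can qualify.
        return extra | set(A[-2:])
    return candidates(B) | extra

def appears_half(A):
    """Given A of even length 2n, return a value appearing >= n times, or None.
    Elements are only ever compared for equality."""
    n = len(A) // 2
    for x in candidates(A):
        if sum(1 for y in A if y == x) >= n:
            return x
    return None

print(appears_half([4, 1, 4, 2, 4, 4, 3, 4]))   # 4 (appears 5 >= 4 times)
print(appears_half([1, 2, 3, 4]))               # None
```

The pairing pass is linear in the current array length and the array at least halves each round, so the reduction costs Θ(n) overall, as in the recurrence above.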
Exercise 2.4. [H] You and a friend find yourselves on a TV game show! The game works as follows. There is a hidden N × N table A. Each cell A[i, j] of the table contains a single integer and no two cells contain the same value. At any point in time, you may ask the value of a single cell to be revealed. To win the big prize, you need to find the N cells each containing the maximum value in its row. Formally, you need to produce an array M[1..N] so that A[r,M[r]] contains the maximum value in row r of A, i.e., such that A[r, M [r]] is the largest integer among A[r, 1], A[r, 2], . . . , A[r, N ]. In addition, to win, you should ask at most 2N⌈logN⌉ many questions. For example, if the hidden grid looks like this:
         Column 1   Column 2   Column 3   Column 4
Row 1       10          5          8          3
Row 2        1          9          7          6
Row 3       -3          4         -1          0
Row 4      -10         -9         -8          2
then the correct output would be M = [1, 2, 2, 4].
Your friend has just had a go, and sadly failed to win the prize because they asked N^2 many questions, which is too many. However, they whisper to you a hint: they found out that M is non-decreasing. Formally, they tell you that M[1] ≤ M[2] ≤ ··· ≤ M[N] (this is the case in the example above).
Design an algorithm which asks at most O(N log N) many questions that produces the array M correctly, even in the very worst case.
Solution. We first find M[N/2]. We can do this in N questions by simply examining each element in row N/2, and finding the largest one.
Suppose M[N/2] = x. Then, since M is non-decreasing, we know that M[i] ≤ x for all i < N/2 and M[i] ≥ x for all i > N/2. Hence, it suffices to recursively solve the problem on the two subgrids
A[1 .. N/2 − 1][1 .. x]   and   A[N/2 + 1 .. N][x .. N].
Hint — Note that you do not have enough time to find out the value of every cell in the grid! Try determining M[N/2] first, and see if divide-and-conquer is of any assistance.
It remains to show that this approach uses at most 2N⌈logN⌉ questions.
Claim — At each stage of recursion, at most two cells of any column are investigated. Proof. We observe the following.
• In the first call of the recursion, only one cell of each column has been investigated. These cells are marked in blue, with the maximum in dark blue.
• In the second call of the recursion, we investigate two cells of one column, and one cell of each of the other columns. These cells are marked in green, with the maxima in dark green.
• In the third call of the recursion, we investigate two cells in each of three columns, and one cell of each of the other columns. These cells are marked in orange.
Since the investigated ranges overlap only at endpoints, the claim holds true. ■
So in each recursion call, at most 2N cells were investigated. The search space decreases by at least half after each call, so there are at most ⌈log2 N⌉ many recursion calls. Therefore, the total number of cells investigated is at most 2N⌈log2 N⌉.
Additional exercise: using the above reasoning, try to find a sharper bound for the total number of cells investigated. ■
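The divide and conquer above can be sketched as follows (0-indexed Python, our own names; `queries[0]` counts cell inspections to illustrate the O(N log N) bound):

```python
def row_maxima(A):
    """Given an N x N grid whose row-maximum column indices M[0..N-1] are
    non-decreasing, find M while inspecting O(N log N) cells."""
    N = len(A)
    M = [0] * N
    queries = [0]

    def solve(r_lo, r_hi, c_lo, c_hi):
        # Fill M[r_lo..r_hi], knowing every answer lies in columns [c_lo..c_hi].
        if r_lo > r_hi:
            return
        mid = (r_lo + r_hi) // 2
        best = c_lo
        for c in range(c_lo, c_hi + 1):           # scan the middle row once
            queries[0] += 1
            if A[mid][c] > A[mid][best]:
                best = c
        M[mid] = best
        solve(r_lo, mid - 1, c_lo, best)          # rows above: columns <= best
        solve(mid + 1, r_hi, best, c_hi)          # rows below: columns >= best

    solve(0, N - 1, 0, N - 1)
    return M

A = [[10, 5, 8, 3],
     [1, 9, 7, 6],
     [-3, 4, -1, 0],
     [-10, -9, -8, 2]]
print(row_maxima(A))   # [0, 1, 1, 3]  (the 0-indexed version of M = [1, 2, 2, 4])
```

At each level of the recursion the scanned column ranges overlap only at their endpoints, which is exactly the claim proved above.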
Exercise 2.5. [H] Define the Fibonacci numbers as
F_0 = 0, F_1 = 1, and F_n = F_(n−1) + F_(n−2) for all n ≥ 2.
Thus, the Fibonacci sequence is as follows:
0, 1, 1, 2, 3, 5, 8, 13, 21, ....
(a) Show, by induction or otherwise, that
[ F_(n+1)  F_n     ]   [ 1  1 ]^n
[ F_n      F_(n−1) ] = [ 1  0 ]
for all integers n ≥ 1.
(b) Hence or otherwise, give an algorithm that finds Fn in O(log n) time.
Hint — You may wish to refer to Example 1.5 on page 28 of the “Review Material” booklet.
Solution. (a) When n = 1 we have
[ F_2  F_1 ]   [ 1  1 ]   [ 1  1 ]^1
[ F_1  F_0 ] = [ 1  0 ] = [ 1  0 ],
so our claim is true for n = 1.
Let k ≥ 1 be an integer, and suppose our claim holds for n = k (Inductive Hypothesis). Then
[ 1  1 ]^(k+1)   [ 1  1 ]^k [ 1  1 ]   [ F_(k+1)  F_k     ] [ 1  1 ]
[ 1  0 ]       = [ 1  0 ]   [ 1  0 ] = [ F_k      F_(k−1) ] [ 1  0 ]
by the Inductive Hypothesis. Hence
[ 1  1 ]^(k+1)   [ F_(k+1) + F_k    F_(k+1) ]   [ F_(k+2)  F_(k+1) ]
[ 1  0 ]       = [ F_k + F_(k−1)    F_k     ] = [ F_(k+1)  F_k     ]
by the definition of the Fibonacci numbers. Hence, our claim is also true for n = k + 1, so by induction it is true for all integers n ≥ 1.
(b) Let
G = [ 1  1 ]
    [ 1  0 ].
Now we can use either of the methods from the previous problem.
1. We can use divide and conquer, recursively calculating
G^n = (G^(n/2))^2          if n is even,
G^n = (G^((n−1)/2))^2 · G  if n is odd.
At each of O(log n) steps of the recursion, we multiply either two or three 2-by-2 matrices in constant time, so the algorithm runs in O(log n).
2. We can instead compute G, G2, G4, . . . by repeated squaring, and multiply those terms which correspond to bits appearing in the binary representation of n. There are ⌊log2 n⌋ such terms to consider, so the algorithm again takes O(log n) time.
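Method 1 can be sketched in Python as follows (our own names; `G^n` is computed by repeated halving of the exponent, and F(n) is read off the top-right entry):

```python
def mat_mult(X, Y):
    """Multiply two 2x2 matrices."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def fib(n):
    """F(n) via G^n where G = [[1,1],[1,0]]: O(log n) 2x2 matrix multiplications."""
    if n == 0:
        return 0

    def mat_pow(G, k):
        if k == 1:
            return G
        half = mat_pow(G, k // 2)
        sq = mat_mult(half, half)
        return mat_mult(sq, G) if k % 2 else sq

    # G^n = [[F(n+1), F(n)], [F(n), F(n-1)]], by part (a).
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(50))                       # 12586269025
```

Each level of `mat_pow` performs one or two constant-size matrix multiplications, giving the O(log n) bound (counting arithmetic operations; the Fibonacci numbers themselves grow in bit length, which this count ignores).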
Exercise 2.6. [H] Let A be an array of n integers, not necessarily distinct or positive. Design a Θ(n log n) algorithm that returns the maximum sum found in any contiguous subarray of A.
Solution. Note that brute force takes Θ(n^2).
The key insight to this problem is the following: once we split the original array in half, the maximum subarray must occur in one of the following cases:
• the maximum subarray sum occurs entirely in the left half of the original array;
• the maximum subarray sum occurs entirely in the right half of the original array;
• the maximum subarray sum overlaps across both halves.
Note — There is an O(n) algorithm that solves this problem; however, the technique used in that solution is much more involved and will be taught in due time.
We can compute the first two cases simply by making recursive calls on the left and right halves of the array. The third case can be computed in linear time. We now fill in the details of the algorithm.
Given an array of size N, split the array into two smaller arrays of approximate size N/2 each. We can compute the maximum subarray sum in the left half using a recursive call on the left half of the array, and similarly for the right half. To compute the maximum sum in the case where the subarray overlaps across both halves, we find the maximum sum over subarrays that end at the midpoint (beginning at some point at or to the left of it), and add to it the maximum sum over subarrays that begin at midpoint + 1 (ending at some point at or to the right of it). This takes O(n) time. Our algorithm then returns the maximum of these three cases.
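The three cases can be sketched in Python as follows (our own names; the empty subarray, of sum 0, is allowed):

```python
def max_subarray(A, lo=0, hi=None):
    """Maximum sum over contiguous (possibly empty) subarrays of A[lo..hi]."""
    if hi is None:
        hi = len(A) - 1
    if lo > hi:
        return 0
    if lo == hi:
        return max(0, A[lo])
    mid = (lo + hi) // 2
    # Best sum of a (possibly empty) subarray ending at mid...
    best_left = cur = 0
    for i in range(mid, lo - 1, -1):
        cur += A[i]
        best_left = max(best_left, cur)
    # ...plus the best sum of one starting at mid + 1.
    best_right = cur = 0
    for i in range(mid + 1, hi + 1):
        cur += A[i]
        best_right = max(best_right, cur)
    crossing = best_left + best_right
    return max(max_subarray(A, lo, mid), max_subarray(A, mid + 1, hi), crossing)

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6, from the subarray [4, -1, 2, 1]
print(max_subarray([-3, -1, -2]))                      # 0, from the empty subarray
```

The two scans around the midpoint are the O(n) combining step, and the two recursive calls are the halves, exactly as in the recurrence below.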
One caveat is that, if the maximum sum is negative, then our algorithm should return 0 to indicate that choosing an empty subarray is best. To compute the time complexity of the algorithm, note that we’re splitting our problem from size N to two N/2 problems and doing an O(n) overhead to combine the solutions together. This gives us the recurrence relation
T (n) = 2T (n/2) + O(n).
This falls under Case 2 of the Master Theorem, so T(n) = Θ(nlogn) is the total running time of the
algorithm. ■

§3 Karatsuba’s trick
Exercise 3.1. [K] Let P(x) = a0 + a1x and Q(x) = b0 + b1x, and define R(x) = P(x) · Q(x). Find the coefficients of R(x) using only three products of pairs of expressions each involving the coefficients ai and/or bj.
Addition and subtraction of the coefficients do not count towards this total of three products, nor do scalar multiples (i.e. products involving a coefficient and a constant).
Solution. We use (essentially) the Karatsuba trick:
R(x) = (a0 + a1 x)(b0 + b1 x)
     = a0 b0 + (a0 b1 + b0 a1) x + a1 b1 x^2
     = a0 b0 + [(a0 + a1)(b0 + b1) − a0 b0 − a1 b1] x + a1 b1 x^2.
Note that the last expression involves only three multiplications: a0 b0, a1 b1 and (a0 + a1)(b0 + b1). ■
Exercise 3.2. [K] Recall that the product of two complex numbers is given by
(a + ib)(c + id) = ac + iad + ibc + i^2 bd = (ac − bd) + i(ad + bc).
(a) Multiply two complex numbers (a + ib) and (c + id) (where a, b, c, d are all real numbers) using only 3 real number multiplications.
(b) Find (a + ib)2 using only two multiplications of real numbers.
(c) Find the product (a + ib)2(c + id)2 using only five real number multiplications.
Solution. (a) This is again the Karatsuba trick:
(a + ib)(c + id) = ac − bd + (bc + ad)i
                 = ac − bd + [(a + b)(c + d) − ac − bd] i,
so we need only 3 multiplications: ac, bd and (a + b)(c + d).
(b) We again expand and refactorise:
(a + ib)^2 = a^2 − b^2 + 2abi
           = (a + b)(a − b) + (a + a)b i,
so two multiplications suffice: (a + b)(a − b) and (a + a)b (note that a + a = 2a is a doubling, not a multiplication).
(c) Observe that
(a + ib)^2 (c + id)^2 = ((a + ib)(c + id))^2.
Therefore, we can use (a) to multiply (a + ib)(c + id) with only three real multiplications, and then use (b) to square the result with two more, for a total of five real multiplications. ■
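All three parts can be checked numerically (a Python sketch with our own names; `mults[0]` counts real multiplications):

```python
def cmul3(a, b, c, d, mults):
    """(a + ib)(c + id) using three real multiplications (Karatsuba trick)."""
    ac = a * c; bd = b * d; cross = (a + b) * (c + d)
    mults[0] += 3
    return (ac - bd, cross - ac - bd)     # (real part, imaginary part)

def csquare2(a, b, mults):
    """(a + ib)^2 using two real multiplications."""
    re = (a + b) * (a - b)                # a^2 - b^2
    im = (a + a) * b                      # 2ab; the doubling is not a multiplication
    mults[0] += 2
    return (re, im)

def squared_product5(a, b, c, d, mults):
    """(a + ib)^2 (c + id)^2 = ((a + ib)(c + id))^2: 3 + 2 = 5 multiplications."""
    re, im = cmul3(a, b, c, d, mults)
    return csquare2(re, im, mults)

m = [0]
print(squared_product5(1, 2, 3, 4, m))   # (-75, -100), i.e. ((1+2i)(3+4i))^2
print(m[0])                              # 5
```

Indeed, (1 + 2i)(3 + 4i) = −5 + 10i, and (−5 + 10i)^2 = −75 − 100i.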
Exercise 3.3. [H] Let P(x) = a0 + a1 x + a2 x^2 + a3 x^3 and Q(x) = b0 + b1 x + b2 x^2 + b3 x^3, and define R(x) = P(x) · Q(x). You are tasked to find the coefficients of R(x) using only twelve products of pairs of expressions, each involving the coefficients ai and/or bj.
Challenge — Can you do better? Solution. We again use the Karatsuba trick:
R(x) = (a0 + a1 x + a2 x^2 + a3 x^3)(b0 + b1 x + b2 x^2 + b3 x^3)
     = (a0 + a1 x)(b0 + b1 x) + (a0 + a1 x)(b2 + b3 x) x^2
       + (a2 + a3 x)(b0 + b1 x) x^2 + (a2 + a3 x)(b2 + b3 x) x^4
     = a0 b0 + [(a0 + a1)(b0 + b1) − a0 b0 − a1 b1] x + a1 b1 x^2
       + {a0 b2 + [(a0 + a1)(b2 + b3) − a0 b2 − a1 b3] x + a1 b3 x^2} x^2
       + {a2 b0 + [(a2 + a3)(b0 + b1) − a2 b0 − a3 b1] x + a3 b1 x^2} x^2
       + {a2 b2 + [(a2 + a3)(b2 + b3) − a2 b2 − a3 b3] x + a3 b3 x^2} x^4.
Note that each of the last four lines involves only three multiplications. Expanding and regrouping will yield the coefficients of R(x).
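As a concrete check of the multiplication count, here is a Python sketch of the decomposition (our own names; `counter[0]` tallies coefficient products):

```python
def mult_linear(a0, a1, b0, b1, counter):
    """Karatsuba for (a0 + a1 x)(b0 + b1 x): three coefficient multiplications."""
    p = a0 * b0; q = a1 * b1; r = (a0 + a1) * (b0 + b1)
    counter[0] += 3
    return [p, r - p - q, q]               # coefficients of 1, x, x^2

def mult_cubic(a, b, counter):
    """Multiply two cubics by splitting each as (low) + (high) x^2,
    as in the decomposition above: 4 linear products x 3 mults = 12."""
    ll = mult_linear(a[0], a[1], b[0], b[1], counter)   # low  x low
    lh = mult_linear(a[0], a[1], b[2], b[3], counter)   # low  x high (times x^2)
    hl = mult_linear(a[2], a[3], b[0], b[1], counter)   # high x low  (times x^2)
    hh = mult_linear(a[2], a[3], b[2], b[3], counter)   # high x high (times x^4)
    R = [0] * 7
    for i in range(3):
        R[i] += ll[i]
        R[i + 2] += lh[i] + hl[i]
        R[i + 4] += hh[i]
    return R

counter = [0]
R = mult_cubic([1, 2, 3, 4], [5, 6, 7, 8], counter)
print(R)           # [5, 16, 34, 60, 61, 52, 32]: coefficients of P*Q
print(counter[0])  # 12
```

Applying Karatsuba once more to the four linear products (treating low/high parts as "digits") is what brings the count down further, towards the seven mentioned in the challenge.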
It is actually possible to solve the problem using only seven multiplications! ■

§4 Selection
Exercise 4.1. [K] Suppose we modify the median of medians quickselect algorithm to use blocks of 7 rather than 5. Determine the appropriate worst-case recurrence, and solve for the asymptotic growth rate.
Solution. We first recurse to find the median of the n/7 block medians. That median is guaranteed to be greater than four elements in each of n/14 blocks, and less than four elements in each of n/14 blocks. Hence at least 4 · n/14 = 2n/7 elements are guaranteed to be smaller than the pivot, and likewise at least 2n/7 are guaranteed to be larger, so the subsequent recursive call is on an array of at most 5n/7 elements. The appropriate worst-case recurrence is therefore
T(n) = T(n/7) + T(5n/7) + Θ(n).
Since 1/7 + 5/7 = 6/7 < 1, the work per level of the recursion tree decreases geometrically, so T(n) = Θ(n), just as with blocks of 5. ■
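A Python sketch of quickselect with blocks of 7 (our own names; sorting a block of at most 7 elements is O(1) work per block):

```python
def select(A, k):
    """Return the k-th smallest (0-indexed) element of A, using
    median-of-medians pivoting with blocks of 7."""
    A = list(A)
    if len(A) <= 7:
        return sorted(A)[k]
    # Median of each block of 7 (the last block may be shorter).
    blocks = [A[i:i + 7] for i in range(0, len(A), 7)]
    medians = [sorted(b)[len(b) // 2] for b in blocks]
    pivot = select(medians, len(medians) // 2)     # recurse on ~n/7 medians
    lo = [x for x in A if x < pivot]
    eq = [x for x in A if x == pivot]
    hi = [x for x in A if x > pivot]
    if k < len(lo):
        return select(lo, k)                        # at most ~5n/7 elements
    if k < len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))        # at most ~5n/7 elements

xs = [(7 * i + 3) % 101 for i in range(101)]        # a permutation of 0..100
print(select(xs, 50))   # 50, the median
```

The two recursive calls operate on arrays of sizes roughly n/7 and at most 5n/7, matching the recurrence above.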