
CMPSC 465 Data Structures & Algorithms
Fall 2021, HW 3

1. (20 pts.) Problem 1

(a) After Build-Heap: [1,5,2,7,6,4,3,9,8,10]
After 1st iteration of the loop: [2,5,3,7,6,4,10,9,8,1]
After 2nd iteration of the loop: [3,5,4,7,6,8,10,9,2,1]
After 3rd iteration of the loop: [4,5,8,7,6,9,10,3,2,1]
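
These states can be checked mechanically. The sketch below is a hypothetical reconstruction, assuming the standard Heap-Sort loop (swap the root with the last element of the active heap, shrink the heap by one, then sift down; 0-indexed array with children at 2i+1 and 2i+2); it reproduces the three iterations starting from the post-Build-Heap array:

```python
def sift_down(a, i, size):
    """Restore the min-heap property at index i within a[0:size]."""
    while True:
        smallest = i
        for c in (2 * i + 1, 2 * i + 2):
            if c < size and a[c] < a[smallest]:
                smallest = c
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

heap = [1, 5, 2, 7, 6, 4, 3, 9, 8, 10]  # the array after Build-Heap, from part (a)
states = []
for size in range(len(heap), len(heap) - 3, -1):
    heap[0], heap[size - 1] = heap[size - 1], heap[0]  # move the minimum to the end
    sift_down(heap, 0, size - 1)                       # re-heapify the shrunken prefix
    states.append(list(heap))
for s in states:
    print(s)
```

The three printed states match the arrays listed above.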

(b) Proof: Assume the element at position ⌊n/2⌋ is an internal node, not a leaf. Then, its left child should be at position 2·⌊n/2⌋ + 1 ≥ 2·(n−1)/2 + 1 = n > n−1 (using ⌊n/2⌋ ≥ (n−1)/2), which is the index of the last element in the heap. Therefore, there is a contradiction. Thus, the element at position ⌊n/2⌋ can't have any children, meaning it must be a leaf. The same reasoning can be applied to the other positions in the statement since they are even larger than ⌊n/2⌋. So, the nodes at positions ⌊n/2⌋, . . . , n−1 must be the leaves.

(c) Consider the array [2,3,4,1]. If we increase i from 0 to ⌊n/2⌋−1, there is no swap in the first iteration. In the second iteration, 3 swaps with 1. The loop and the Build-Heap procedure then finish with the array [2,1,4,3], which does not satisfy the min-heap property, since 2 is greater than its left child 1. So, increasing i from 0 to ⌊n/2⌋−1 cannot guarantee that the array we obtain from Build-Heap has the min-heap property.
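
A quick script confirms this counterexample (a sketch under the conventions of part (c): 0-indexed array, children at 2i+1 and 2i+2; function names are ours). Running Min-Heapify in increasing order of i leaves [2,3,4,1] in a non-heap state, while the usual decreasing order produces a valid min-heap:

```python
def sift_down(a, i):
    """Min-Heapify at index i (0-indexed, children at 2i+1 and 2i+2)."""
    n = len(a)
    while True:
        smallest = i
        for c in (2 * i + 1, 2 * i + 2):
            if c < n and a[c] < a[smallest]:
                smallest = c
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def build_heap(a, ascending):
    # ascending=True is the broken order i = 0, ..., floor(n/2)-1;
    # ascending=False is the correct order i = floor(n/2)-1, ..., 0
    order = range(len(a) // 2) if ascending else reversed(range(len(a) // 2))
    for i in order:
        sift_down(a, i)
    return a

bad = build_heap([2, 3, 4, 1], ascending=True)
good = build_heap([2, 3, 4, 1], ascending=False)
print(bad, good)
```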

2. (15 pts.) Problem 2

We keep M+1 counters, one for each of the possible values of the array elements. We can use these counters
to compute the number of elements of each value by a single O(n)-time pass through the array. Then, we
can obtain a sorted version of x by filling a new array with the prescribed numbers of elements of each value,
looping through the values in ascending order. Notice that the Ω(n logn) bound does not apply in this case,
as this algorithm is not comparison-based.
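
The counter-based idea above can be sketched as follows (a minimal illustration, assuming the array elements are integers in {0, 1, ..., M}; the function name is ours):

```python
def counting_sort(x, M):
    """Sort x, whose elements are integers in {0, 1, ..., M}, in O(n + M) time."""
    counts = [0] * (M + 1)      # one counter per possible value
    for v in x:                 # a single O(n) pass tallies each value
        counts[v] += 1
    out = []
    for v in range(M + 1):      # loop through the values in ascending order
        out.extend([v] * counts[v])
    return out
```

No element is ever compared to another, which is why the Ω(n log n) comparison-sort lower bound does not constrain this algorithm.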

3. (15 pts.) Problem 3

(a) We get the recurrence T(n) = T(n/2) + Θ(1) for the running time of Binary Search. Applying the master theorem with a = 1, b = 2, d = 0, and log_b a = 0, we note that we are in the second case, and so T(n) = Θ(n^(log_2 1) · log n) = Θ(log n).

(b) For Ternary Search the recurrence is T(n) = T(n/3) + Θ(1). Applying the master theorem with a = 1, b = 3, and d = 0, we have that log_3 1 = 0 = d. So, we are in the second case, and we have that T(n) = Θ(n^(log_3 1) · log n) = Θ(log n).
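
Both recurrences reflect discarding a constant fraction of the range with Θ(1) comparisons per round. A sketch of the two searches on a sorted array (our own illustrative implementations; the ternary variant uses cut points one third in from each end):

```python
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # one probe halves the range: T(n) = T(n/2) + Θ(1)
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def ternary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third   # two probes cut to a third: T(n) = T(n/3) + Θ(1)
        if a[m1] == target:
            return m1
        if a[m2] == target:
            return m2
        if target < a[m1]:
            hi = m1 - 1
        elif target > a[m2]:
            lo = m2 + 1
        else:
            lo, hi = m1 + 1, m2 - 1
    return -1
```

Ternary search makes up to two comparisons per round versus one, so despite the smaller recursion depth its constant factor is worse; asymptotically both are Θ(log n).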

4. (20 pts.) Problem 4

Let m = ⌊n/2⌋. First, we divide the input sequence into two halves (A1 = a1, . . . , am and A2 = a(m+1), . . . , an). Also, let T(n) be the time it takes to sort an input sequence of n numbers and to count its number of inversions. First, we recursively sort and count each half, which takes 2T(n/2). Then, we merge the two halves and count the number of inversions between them. We can do this as follows. Remember that the two halves are A1 and A2, and let B1 and B2 be the outputs of the recursive calls on A1 and A2. Let i be the pointer into array B1, and j be the pointer into array B2, both initialized to zero (for example, i = 0 points to the first element of B1). (Merge part:) Compare the current elements of B1 and B2, put the smaller one in the output array S, and advance its pointer by one. (Count part:) Also, whenever we increase the B2


pointer by one, we increase the total number of inversions by length(B1) − i, since whenever we increase j (the B2 pointer) by one, B2[j] is less than every remaining element of B1[i:], which makes length(B1) − i inversions. For more details, see the pseudocode in Algorithm 1.
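
The same procedure can be sketched as runnable Python (our illustrative version of the Sort-and-Count routine, using list slicing for the halves):

```python
def sort_count(a):
    """Return (sorted copy of a, number of inversions in a)."""
    n = len(a)
    if n <= 1:
        return list(a), 0
    b1, cnt1 = sort_count(a[:n // 2])   # sort and count the left half
    b2, cnt2 = sort_count(a[n // 2:])   # sort and count the right half
    merged, i, j, cross = [], 0, 0, 0
    while i < len(b1) and j < len(b2):
        if b1[i] <= b2[j]:
            merged.append(b1[i])
            i += 1
        else:
            # b2[j] is smaller than every remaining element of b1,
            # so it forms len(b1) - i cross inversions
            merged.append(b2[j])
            j += 1
            cross += len(b1) - i
    merged.extend(b1[i:])   # at most one of these tails is nonempty
    merged.extend(b2[j:])
    return merged, cnt1 + cnt2 + cross
```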

Algorithm 1 Sort-and-Count
Input: an array A = [a1, a2, . . . , an], and its length n
function SORT-COUNT(A, n)
    if n == 1 then return A[0], 0
    end if
    A1 ← [a1, . . . , a⌊n/2⌋], A2 ← [a⌊n/2⌋+1, . . . , an]
    B1, cnt1 ← SORT-COUNT(A1, ⌊n/2⌋)
    B2, cnt2 ← SORT-COUNT(A2, ⌈n/2⌉)
    i ← 0, j ← 0, k ← 0
    S ← an array of length n
    cnt ← 0
    while i < length(B1) and j < length(B2) do
        if B2[j] ≥ B1[i] then
            S[k] ← B1[i], i ← i + 1
        else
            S[k] ← B2[j], j ← j + 1, cnt ← cnt + length(B1) − i
        end if
        k ← k + 1
    end while
    if i = length(B1) then S[k : n−1] ← B2[j : length(B2)−1] end if
    if j = length(B2) then S[k : n−1] ← B1[i : length(B1)−1] end if
    return S, cnt1 + cnt2 + cnt
end function

Now, since the merge part takes at most n iterations of the while loop, the running time of the merge is O(n). So, the recurrence relation is:

T(n) = 2T(n/2) + O(n)     (1)

If we use the master theorem on the above recurrence relation, we have T(n) = O(n log n).

5. (30 pts.) Problem 5

All recurrences can be solved by unrolling the recurrence and carefully arranging the terms, following the level-by-level approach from lectures (which is equivalent), or using various forms of the Master Theorem. However, the substitution or induction method is sometimes more convenient for solving a recurrence:

(a) In class, we saw that if T(n) = a · T(n/b) + O(n^d) for some a > 0, b > 1, and d ≥ 0, then:


T(n) = O(n^d)            if d > log_b a
T(n) = O(n^d · log n)    if d = log_b a
T(n) = O(n^(log_b a))    if d < log_b a

Using the Master Theorem (a = 9, b = 3, d = 2), we get T(n) = O(n^2 log n).

(b) T(n) = 2T(n/5) + √n = O(√n) by the Master Theorem (a = 2, b = 5, d = 1/2).

(c) T(n) = T(n−1) + c^n = ∑_{i=1}^{n} c^i + T(0) = c · (c^n − 1)/(c − 1) + T(0) = O(c^n).

(d) T(n) = T(n−1) + n^c = ∑_{i=1}^{n} i^c + T(0) = O(n^(c+1)).

(e) T(n) = 5T(n/4) + n = O(n^(log_4 5)) by the Master Theorem.

(f) Unrolling, we have T(n) = 1.5^n + 1.5^(n/2) + 1.5^(n/4) + 1.5^(n/8) + · · · Then T(n) ≤ ∑_{i=0}^{n} 1.5^i = O(1.5^n). Hence, T(n) = O(1.5^n).

(g) T(n) = 49T(n/25) + n^(3/2) log n = O(n^(3/2) log n). By unrolling we have:

    ∑_{i=0}^{log_25 n} (49/25^(3/2))^i · n^(3/2) log(n/25^i)
        < ∑_{i=0}^{log_25 n} (49/125)^i · O(n^(3/2) log n)
        = O(n^(3/2) log n) · ∑_{i=0}^{log_25 n} (49/125)^i
        = O(n^(3/2) log n) · O(1) = O(n^(3/2) log n)     (2)

Note that in the latter equation we used the fact that the geometric series is O(1), since 49/125 < 1.

(h) By induction we will show that T(n) = O(n). Assume that for k ≤ n−1, T(k) ≤ ck, where c ≥ 8. Then we have:

    T(n) = T(n/2) + T(n/4) + T(n/8) + n ≤ (7/8)cn + n ≤ cn     (3)

As a result, we proved that T(n) ≤ 8n (taking c = 8), which means that T(n) = O(n).

(i) • First method (Unrolling).
As the upper bound, suppose T(n) ≤ T(3n/5) + T(2n/5) + cn for a suitable constant c > 0. Note by unrolling that the size of each sub-problem at level k is of the form S_{k,i} = (3/5)^i (2/5)^(k−i) · n for some i = 0, . . . , k. Moreover, there are C(k, i) sub-problems of size S_{k,i} at level k. The height of the recurrence tree is log_{5/2} n. Hence,

    T(n) ≤ ∑_{k=0}^{log_{5/2} n} ∑_{i=0}^{k} C(k, i) · c · (3/5)^i (2/5)^(k−i) · n = ∑_{k=0}^{log_{5/2} n} cn = O(n log n),

where the inner sum collapses to cn by the binomial theorem, since ∑_{i=0}^{k} C(k, i) (3/5)^i (2/5)^(k−i) = (3/5 + 2/5)^k = 1.

• Second method (Induction).
As the induction step, assume that for all k < n, we have T(k) ≤ c1 · k log k. We want to show that there exists c1 such that for all large enough n, T(n) ≤ c1 · n log n. Since both 2n/5 and 3n/5 are less than n, by the induction assumption we have T(2n/5) ≤ c1 · (2n/5) log(2n/5) and T(3n/5) ≤ c1 · (3n/5) log(3n/5).

Now, using the recurrence relation for T(n), we can write:

    T(n) ≤ T(3n/5) + T(2n/5) + cn
         ≤ c1 · (3n/5) log(3n/5) + c1 · (2n/5) log(2n/5) + cn
         = c1 · (3n/5) log(3/5) + c1 · (3n/5) log n + c1 · (2n/5) log(2/5) + c1 · (2n/5) log n + cn
         = c1 · n log n + c1 · n · ((3/5) log(3/5) + (2/5) log(2/5)) + cn
         = c1 · n log n + cn − c1 · n · ((3/5) log(5/3) + (2/5) log(5/2))     (4)

Now, note that if we let c1 = c / ((3/5) log(5/3) + (2/5) log(5/2)) and substitute it into the latter equation, we get cn − c1 · n · ((3/5) log(5/3) + (2/5) log(5/2)) = 0, which means that T(n) ≤ c1 · n log n. So, the induction step is complete, and we proved that T(n) = O(n log n).

(j) Note by unrolling that T(n) = n^(1/2 + 1/4 + · · · + 1/2^k) · T(n^(1/2^k)) + 10kn. (This can also be proved, for example, by induction.) Now, if we let k = log2 log2 n, we have n^(1/2^k) = 2, which is constant. Since 1/2 ≤ 1/2 + 1/4 + · · · + 1/2^k ≤ 1, we have T(n) = Θ(n log log n).

Rubric:

Problem 1, total points 20
(a) 6 points
    3 points: provide the correct resulting array for Build-Heap
    1 point: provide the correct resulting array for the 1st iteration
    1 point: provide the correct resulting array for the 2nd iteration
    1 point: provide the correct resulting array for the 3rd iteration
(b) 7 points
    4 points: explain the child issue
    3 points: explain that the child issue is applicable to all the elements in the specified positions
(c) 7 points
    3 points: provide a valid counterexample for increasing i from 0 to ⌊n/2⌋−1
    4 points: explain why increasing i from 0 to ⌊n/2⌋−1 fails on this counterexample

Problem 2, 15 pts
• 8 points for the proof of time complexity
• 7 points for the justification of why Ω(n log n) does not apply to the case where M is small and the time is linear
Problem 3, 15 pts
• 4 points for the recurrence T(n) = T(n/2) + Θ(1), 4 points for the running time Θ(log n)
• 3.5 points for the recurrence T(n) = T(n/3) + Θ(1), 3.5 points for the running time Θ(log n)

Problem 4, 20 pts
The algorithm includes two recursive calls, and then a merge-and-count part.
Designing a correct merge-and-count procedure: 5
The correctness of the algorithm: 5
Deriving the recurrence relation: 5
Showing the run-time from the recurrence relation: 5

Problem 5, 30 pts
There are a total of 10 parts; each part is worth 3 pts. For the parts that can be solved by the Master Theorem, only a correct answer gets full points, and there is no partial credit for these parts. For the parts where the Master Theorem cannot be applied, unrolling correctly is worth 1.5 pts, and showing the final answer is worth 1.5 pts.