
Tutorial
COMP20007 Design of Algorithms
Week 8 Workshop Solutions
1. Simple Sorting Algorithms
(a) Selection Sort. When running selection sort the key idea is that for each i from 0 to n − 2 we find the smallest (or largest if we’re sorting in descending order) element in A[i . . . n − 1] and swap it with A[i].
(i) We'll show the array at the start of each iteration i (the elements A[0…i−1] already sorted), along with the minimum element of A[i…n−1] and the swap (if any) performed.

i = 0: [A N A L Y S I S]   min is A[0] = A, no swap
i = 1: [A N A L Y S I S]   min is A[2] = A, swap A[1] ↔ A[2] → [A A N L Y S I S]
i = 2: [A A N L Y S I S]   min is A[6] = I, swap A[2] ↔ A[6] → [A A I L Y S N S]
i = 3: [A A I L Y S N S]   min is A[3] = L, no swap
i = 4: [A A I L Y S N S]   min is A[6] = N, swap A[4] ↔ A[6] → [A A I L N S Y S]
i = 5: [A A I L N S Y S]   min is A[5] = S, no swap
i = 6: [A A I L N S Y S]   min is A[7] = S, swap A[6] ↔ A[7] → [A A I L N S S Y]

Final sorted array: [A A I L N S S Y]
(ii) At each i ∈ {0,…,n−2} we must take the minimum element from the array A[i…n−1], which requires n − 1 − i comparisons. So the total number of comparisons, C(n), will be:

C(n) = Σ_{i=0}^{n−2} (n − 1 − i)
     = Σ_{i=0}^{n−2} (n − 1) − Σ_{i=0}^{n−2} i
     = (n − 1)(n − 1) − (n − 2)(n − 1)/2
     = (1/2)(2n² − 4n + 2 − n² + 3n − 2)
     = (1/2)(n² − n)
     ∈ Θ(n²)
(iii) Selection sort is not stable. Consider the following counterexample (where 1a and 1b are used to differentiate between the two elements with the same value):

i = 0: [1a 1b 0]   min is 0, swap A[0] ↔ A[2] → [0 1b 1a]
i = 1: [0 1b 1a]   min is 1b, no swap

Final array: [0 1b 1a]
Since 1a and 1b end up out of order relative to one another, this sorting algorithm is not stable.
(iv) Yes, selection sort sorts in-place, as it requires only O(1) additional memory.
(v) No, selection sort is not input sensitive: the number of comparisons is a function of n only, and the order of the elements has no effect.
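For concreteness, here is a minimal C sketch of selection sort (our own illustration, not part of the original solutions; the function name and char-array demo are our choices):

#include <stdio.h>

/* Sort A[0..n-1] in ascending order using selection sort. */
void selection_sort(char A[], int n) {
    for (int i = 0; i < n - 1; i++) {
        /* Find the index of the smallest element in A[i..n-1]. */
        int min = i;
        for (int j = i + 1; j < n; j++) {
            if (A[j] < A[min]) {
                min = j;
            }
        }
        /* Swap it into position i. */
        char tmp = A[i];
        A[i] = A[min];
        A[min] = tmp;
    }
}

int main(void) {
    char A[] = "ANALYSIS";
    selection_sort(A, 8);
    printf("%s\n", A);   /* prints AAILNSSY */
    return 0;
}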
(b) Insertion Sort. The key idea of insertion sort is that we have some section at the start of the array which is sorted (initially only A[0]) and we repeatedly insert the next element into the correct position in this sorted segment, until the whole array is sorted.
More precisely, with i going from 1 to n−1 we have A[0…i−1] already sorted, and we swap A[i] backwards until it is in the correct position in the sorted section A[0…i] (i.e., until it is no smaller than the element immediately preceding it).
(i) For each i we'll show the array at the start of the iteration, then the comparisons and swaps made while inserting A[i] into the sorted section:

i = 1: insert N into [A]:              N ≥ A, no swaps                   → [A N A L Y S I S]
i = 2: insert A into [A N]:            A < N, swap; A vs A, stop         → [A A N L Y S I S]
i = 3: insert L into [A A N]:          L < N, swap; L vs A, stop         → [A A L N Y S I S]
i = 4: insert Y into [A A L N]:        Y ≥ N, no swaps                   → [A A L N Y S I S]
i = 5: insert S into [A A L N Y]:      S < Y, swap; S vs N, stop         → [A A L N S Y I S]
i = 6: insert I into [A A L N S Y]:    I < Y, I < S, I < N, I < L, four swaps; I vs A, stop → [A A I L N S Y S]
i = 7: insert S into [A A I L N S Y]:  S < Y, swap; S vs S, stop         → [A A I L N S S Y]

Final sorted array: [A A I L N S S Y]
(ii) The best case for insertion sort is when the array is already sorted and there are no swaps to be made. There are exactly n − 1 comparisons made in this case, thus insertion sort is Ω(n).
On the other hand, the worst case input is when the input is sorted in reverse order, and for each i ∈ {1,…,n − 1} there are i swaps to be made (and hence i comparisons), so the number of comparisons is:
C(n) = Σ_{i=1}^{n−1} i = n(n − 1)/2 ∈ O(n²).
So the worst case time complexity of insertion sort is O(n2).
(iii) Insertion sort is stable. The only way two elements' relative order can be reversed is if they are swapped with each other directly. Since we never swap elements of equal value, the relative order of elements with equal value is maintained.
(iv) Insertion sort does sort in place; only a constant amount of additional memory is required.
(v) Insertion sort is input sensitive, as we can see from (ii), where the number of comparisons differs between the best case (sorted input) and the worst case (reverse-sorted input).
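Again for concreteness, a minimal C sketch of insertion sort (our own illustration); note the strict < in the inner loop, which is exactly what makes the sort stable:

#include <stdio.h>

/* Sort A[0..n-1] in ascending order using insertion sort. */
void insertion_sort(char A[], int n) {
    for (int i = 1; i < n; i++) {
        /* Swap A[i] backwards until it is no smaller than its predecessor.
           The strict < leaves equal elements in their original order. */
        for (int j = i; j > 0 && A[j] < A[j - 1]; j--) {
            char tmp = A[j];
            A[j] = A[j - 1];
            A[j - 1] = tmp;
        }
    }
}

int main(void) {
    char A[] = "ANALYSIS";
    insertion_sort(A, 8);
    printf("%s\n", A);   /* prints AAILNSSY */
    return 0;
}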
(c) Quicksort (with Lomuto partitioning). Quicksort is a recursive algorithm which partitions the input array around a pivot p and then recursively quicksorts the left portion (the elements < p) and the right portion (the elements ≥ p). The Quicksort algorithm is:

function Quicksort(A[l…r])
    if l < r then
        s ← Partition(A[l…r])
        Quicksort(A[l…s−1])
        Quicksort(A[s+1…r])

The Lomuto partitioning algorithm is:

function LomutoPartition(A[l…r])
    p ← A[l]
    s ← l
    for i ← l + 1 to r do
        if A[i] < p then
            s ← s + 1
            Swap(A[s], A[i])
    Swap(A[l], A[s])
    return s

(i) Performing the algorithm on [A N A L Y S I S]:

Partitioning [A N A L Y S I S] with pivot A: no element is strictly smaller than A, so s never advances and the final swap leaves the array unchanged. Partition returns s = 0, the pivot A is fixed in place, and we recursively quicksort the (empty) left part and [N A L Y S I S].

Partitioning [N A L Y S I S] with pivot N: A and L are smaller than N, so s advances over them; when i reaches I we swap it with Y, giving [N A L I S Y S], and the final swap puts the pivot between the two regions: [I A L N S Y S]. So the full array is now:

[A I A L N S Y S]

and we recursively quicksort [I A L] and [S Y S].

Starting with [I A L] (pivot I): A is smaller than I, L is not, so the final swap gives [A I L], and we recursively quicksort [A] and [L] (the base case, so we do nothing).

Now we have [A A I L N S Y S] and we quicksort [S Y S] (pivot S): no element is strictly smaller than S, so nothing is moved, and we recursively quicksort [Y S] (pivot Y): S is smaller than Y, and the final swap gives [S Y].

So we have [A A I L N S S Y], the remaining recursive calls hit the base case, and we're finished! So the final sorted array is [A A I L N S S Y].

(ii) From the lectures, Quicksort is Θ(n log n) in the best case and Θ(n²) in the worst case (for example, when the array is in reverse sorted order we do n partitions of cost Θ(n) each).

(iii) Quicksort with Lomuto partitioning is not stable. Consider the counterexample [2 1a 1b]: both 1s are smaller than the pivot 2, so s advances over them without reordering them, and the final swap Swap(A[0], A[2]) gives [1b 1a 2]. After the first partition we're left with [1b 1a] to recursively quicksort. Partitioning this with Lomuto partitioning leaves it as is (check this!), and so our input [2 1a 1b] becomes [1b 1a 2] after being quicksorted. The relative order of the 1s was not preserved, so this algorithm must not be stable.

(iv) The algorithm is in place; here we're allowing the O(log n) space required for the function stack when we do the recursive calls.

(v) This algorithm is input sensitive, as demonstrated by the different best- and worst-case time complexities depending on the input order.
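A C sketch mirroring the pseudocode above (our own illustration; the helper names are our choices):

#include <stdio.h>

void swap(char *a, char *b) { char t = *a; *a = *b; *b = t; }

/* Lomuto partition: pivot is A[l]; returns the pivot's final index. */
int lomuto_partition(char A[], int l, int r) {
    char p = A[l];
    int s = l;                      /* boundary of the "< p" region */
    for (int i = l + 1; i <= r; i++) {
        if (A[i] < p) {
            s++;
            swap(&A[s], &A[i]);
        }
    }
    swap(&A[l], &A[s]);             /* put the pivot between the regions */
    return s;
}

void quicksort(char A[], int l, int r) {
    if (l < r) {
        int s = lomuto_partition(A, l, r);
        quicksort(A, l, s - 1);
        quicksort(A, s + 1, r);
    }
}

int main(void) {
    char A[] = "ANALYSIS";
    quicksort(A, 0, 7);
    printf("%s\n", A);   /* prints AAILNSSY */
    return 0;
}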
2. Heaps

(a) As in the lectures we leave the first index empty, and then populate the array with the elements of the heap in level-order:

[−, 1, 3, 9, 4, 5, 15, 14, 8, 6, 10]

(b) First we store the value of the root of the heap (in this case 1), then we move the last element in the heap (10) up to the root, giving, in level-order:

[−, 10, 3, 9, 4, 5, 15, 14, 8, 6]

We then SiftDown the root: compare it to its children, and if it's larger than either, swap it with the smaller child and repeat; if it's smaller than both children, stop.

swap 10 with its smaller child 3:  [−, 3, 10, 9, 4, 5, 15, 14, 8, 6]
swap 10 with its smaller child 4:  [−, 3, 4, 9, 10, 5, 15, 14, 8, 6]
swap 10 with its smaller child 6:  [−, 3, 4, 9, 6, 5, 15, 14, 8, 10]

(c) We'll add the new value (2) in a new node at the next spot in the tree, and then SiftUp. Sifting up consists of swapping the node with its parent as long as its parent is larger than it.

add 2 as the left child of 5:  [−, 3, 4, 9, 6, 5, 15, 14, 8, 10, 2]
swap 2 with its parent 5:      [−, 3, 4, 9, 6, 2, 15, 14, 8, 10, 5]
swap 2 with its parent 4:      [−, 3, 2, 9, 6, 4, 15, 14, 8, 10, 5]
swap 2 with its parent 3:      [−, 2, 3, 9, 6, 4, 15, 14, 8, 10, 5]
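A C sketch of these heap operations (our own illustration; the 1-indexed layout with index 0 unused matches part (a), but all function names are our choices). It reproduces the traces from (b) and (c):

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Repair the heap by sifting H[i] down below its smaller child. */
void sift_down(int H[], int n, int i) {
    while (2 * i <= n) {
        int child = 2 * i;                      /* left child */
        if (child + 1 <= n && H[child + 1] < H[child])
            child++;                            /* smaller of the two */
        if (H[i] <= H[child]) break;
        swap(&H[i], &H[child]);
        i = child;
    }
}

/* Remove and return the minimum (the root), as in 2(b). */
int remove_min(int H[], int *n) {
    int min = H[1];
    H[1] = H[(*n)--];    /* move the last element to the root */
    sift_down(H, *n, 1);
    return min;
}

/* Insert v at the next free spot and sift it up, as in 2(c). */
void insert(int H[], int *n, int v) {
    H[++(*n)] = v;
    for (int i = *n; i > 1 && H[i] < H[i / 2]; i /= 2)
        swap(&H[i], &H[i / 2]);
}

int main(void) {
    int H[32] = {0, 1, 3, 9, 4, 5, 15, 14, 8, 6, 10};   /* heap from 2(a) */
    int n = 10;
    printf("min = %d\n", remove_min(H, &n));   /* 1; heap becomes 2(b)'s */
    insert(H, &n, 2);                          /* heap becomes 2(c)'s */
    for (int i = 1; i <= n; i++) printf("%d ", H[i]);
    printf("\n");                              /* 2 3 9 6 4 15 14 8 10 5 */
    return 0;
}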
3. (Optional) Using a min-heap for kth-smallest element. Consider the following algorithm, which finds the kth-smallest element using a min-heap:

function HeapKthSmallest(A[0…n−1], k)
    Heap ← Heapify(A[0…n−1])
    // remove the smallest k − 1 elements
    for i ← 0 to k − 2 do
        RemoveMin(Heap)
    return RemoveMin(Heap)

The time complexity of Heapify is Θ(n) and each RemoveMin is Θ(log n), so the total time complexity of this algorithm is Θ(n + k log n). Notice that this algorithm is not input-sensitive with respect to the order of the array; it is only input sensitive with respect to k. This may be a desirable property for a variety of reasons.
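A self-contained C sketch of this algorithm (our own illustration; we implement Heapify as the bottom-up Θ(n) construction, and take k to be 1-indexed as in the pseudocode above):

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void sift_down(int H[], int n, int i) {
    while (2 * i <= n) {
        int c = 2 * i;
        if (c + 1 <= n && H[c + 1] < H[c]) c++;
        if (H[i] <= H[c]) break;
        swap(&H[i], &H[c]);
        i = c;
    }
}

int remove_min(int H[], int *n) {
    int min = H[1];
    H[1] = H[(*n)--];
    sift_down(H, *n, 1);
    return min;
}

/* kth smallest (k = 1 for the minimum): Theta(n) bottom-up Heapify,
   then k removals of Theta(log n) each, i.e. Theta(n + k log n). */
int heap_kth_smallest(int H[], int n, int k) {
    for (int i = n / 2; i >= 1; i--)   /* bottom-up Heapify */
        sift_down(H, n, i);
    while (--k > 0)                    /* discard the k-1 smallest */
        remove_min(H, &n);
    return remove_min(H, &n);
}

int main(void) {
    int H[] = {0, 9, 3, 2, 15, 10, 29, 7};        /* H[1..7]; index 0 unused */
    printf("%d\n", heap_kth_smallest(H, 7, 5));   /* 5th smallest: 10 */
    return 0;
}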

4. (Optional) Quickselect. Quickselect is an algorithm for solving the kth-smallest element problem using the Partition algorithm from quicksort.

(a) We know that the pivot must occupy index p in the sorted array. We can make use of this fact and only search the section of the array which must contain the element with index k in the sorted array. The pseudocode for quickselect is as follows:

function Quickselect(A[0…n−1], k)
    pivot ← SelectPivot(A[0…n−1])
    // p is the index of the pivot in the partitioned array
    p ← Partition(A[0…n−1], pivot)
    if p == k then
        return A[k]
    else if p > k then
        return Quickselect(A[0…p−1], k)
    else
        return Quickselect(A[p+1…n−1], k − (p + 1))

(b) We will assume for this question that we are selecting the first element to be the pivot, but as we have seen in lectures this is not necessarily the best strategy.
Tracing Quickselect on [9, 3, 2, 15, 10, 29, 7] with k = 4:

[9, 3, 2, 15, 10, 29, 7] →Partition→ [3, 2, 7, 9, 15, 10, 29],  p = 3 < k,
so we recurse on the right part with k ← k − (p + 1) = 0

[15, 10, 29] →Partition→ [10, 15, 29],  p = 1 > k,
so we recurse on the left part with k ← k = 0

[10] →Partition→ [10],  p = 0 == k
⟹ return 10
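A C sketch of Quickselect (our own illustration) using the Lomuto partition from question 1(c); its intermediate arrays may differ from the trace above if the lectures' Partition differs, but the element returned is the same:

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition with pivot A[l]; returns the pivot's final index. */
int partition(int A[], int l, int r) {
    int p = A[l], s = l;
    for (int i = l + 1; i <= r; i++)
        if (A[i] < p) swap(&A[++s], &A[i]);
    swap(&A[l], &A[s]);
    return s;
}

/* Return the element of (0-indexed) rank k within A[l..r]. */
int quickselect(int A[], int l, int r, int k) {
    int p = partition(A, l, r);        /* pivot's sorted position */
    if (p == l + k) return A[p];
    if (p > l + k)                     /* answer lies left of the pivot */
        return quickselect(A, l, p - 1, k);
    /* answer lies right of the pivot; p - l + 1 elements are ruled out */
    return quickselect(A, p + 1, r, k - (p - l + 1));
}

int main(void) {
    int A[] = {9, 3, 2, 15, 10, 29, 7};
    printf("%d\n", quickselect(A, 0, 6, 4));   /* k = 4: prints 10 */
    return 0;
}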
(c) The best-case for Quickselect is when p = k after the first partition. With our pivot strategy of selecting pivot = A[0] this corresponds to having the kth-smallest element at the start of the array.
The single Partition call takes Θ(n) time, so the best-case time complexity for Quickselect is Θ(n).
(d) The worst case for Quickselect is when each call to Partition only rules out one element, and thus we have to Partition on n elements, then n − 1 elements, then n − 2 elements, and so on. An example of this is finding k = 0 when the input is in reverse sorted order.

The time taken for each partition is linear, so the total time for Quickselect in the worst case is:

Σ_{i=1}^{n} i = n(n + 1)/2 ∈ Θ(n²).

(e) To compute the expected time complexity of this algorithm we will make the assumption that the order of the array is uniformly random, and that after each Partition the pivot ends up in the middle of the array, i.e., p = n/2. In this case, the recurrence relation for Quickselect will be:

T(n) = Θ(n) + T(n/2),   T(1) = 1
We see that this fits the format required by the Master Theorem with a = 1, b = 2, c = 1. Since log₂(1) = 0 < 1 = c, we have that T(n) ∈ Θ(n^c) = Θ(n). Thus Quickselect has an expected time complexity of Θ(n).

It turns out that even if the array is split into very unevenly sized components after partitioning (e.g., 0.99n and 0.01n length sub-arrays) and the kth-smallest element always ends up in the larger array, the runtime of Quickselect is still Θ(n). Can you use the Master Theorem to show why this is the case?

(f) The worst case for Quickselect is much worse than the worst case for the heap-based algorithm (Θ(n²) vs. Θ(n log n)). Also, if we make the assumption that k ≪ n (i.e., k is very small compared to n) then we can treat Θ(n + k log n) as Θ(n), meaning the heap-based approach is comparable to the best case for Quickselect. In general, for unknown k, we can expect Quickselect to be faster in the expected case.