
EECS4101M Solutions to Assignment 1 Winter 2021 York University Instructor: Andy Mirzaian
(32%) Lecture Slide 1, Exercise 2. Binary Counter with Increment and Reset.
We introduce a new field max[A] to hold the index of the high-order 1 in A. max[A] is initially −1, since the low-order bit of A is at index 0, and there are initially no 1-bits in A. max[A] is updated as appropriate when the counter is incremented or reset, and is used to limit how much of A must be looked at to reset it. By controlling the cost of RESET in this way, we can limit it to an amount that can be covered by stored credit from earlier INCREMENTs.
procedure INCREMENT(A)
    i ← 0
    while i < length[A] and A[i] = 1 do
        A[i] ← 0
        i ← i + 1
    end while
    if i ≥ length[A]
    then max[A] ← −1
    else do
        A[i] ← 1
        if max[A] < i then max[A] ← i
    end else
end{INCREMENT}

procedure RESET(A)
    for i ← 0 .. max[A] do A[i] ← 0
    max[A] ← −1
end{RESET}

Amortized Analysis: As before, we assume that it costs 1 dollar to probe or flip a bit. In addition, we assume it costs 1 dollar to update max[A]. Setting/resetting of bits by INCREMENT works exactly as before.

Credit Invariant: For each bit position from the lowest order up to max[A], each 0-bit has 1 dollar and each 1-bit has 2 dollars of stored credit.

Amortized Cost: We will see that charging 3 dollars for each INCREMENT and 1 dollar for each RESET is sufficient to maintain this Credit Invariant.

INCREMENT: For each 1-bit that is flipped to 0, one of its 2 stored dollar credits pays for the flip; the other remains as the stored credit of the resulting 0-bit. One of the allotted 3 dollars pays for the single 0-to-1 bit flip; another allotted dollar is added to the stored credit of that 1-bit. In addition, if that bit is a new high-order 1-bit, the 3rd allotted dollar is stored with that bit and pays for updating max[A]. (Note 1: If max[A] does not increase, we can simply waste that last dollar; it won't be needed.)

RESET: RESET manipulates the bits at positions 0 .. max[A], and each such bit has at least 1 dollar of stored credit, which is sufficient to pay for its access/flip. The allotted 1 dollar pays for resetting max[A]. (Note 2: This 1 allotted dollar is needed to pay for the RESET cost even if all the bits are already zero. If we assume RESET is never called on a counter with all bits 0, then the amortized cost of RESET is 0 dollars, since the extra stored credit of the 1-bit at position max[A] can pay for the needed 1 dollar of cost.)

Thus, charging 3 dollars for each INCREMENT and 1 dollar for each RESET is sufficient. So a sequence of n INCREMENT and RESET operations takes O(n) time.

Exercise: Analyze by the potential function method, using the regular potential Φ(A) = 1 + max[A] + Σi A[i].
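For concreteness, here is a minimal executable sketch of the counter in Python; the class name BinaryCounter, the fixed counter width, and the test values are our own illustration, not part of the lecture slides.

class BinaryCounter:
    def __init__(self, length):
        self.A = [0] * length        # A[0] is the low-order bit
        self.max = -1                # index of the high-order 1-bit; -1 if none

    def increment(self):
        i = 0
        while i < len(self.A) and self.A[i] == 1:
            self.A[i] = 0            # each 1-to-0 flip is paid by stored credit
            i += 1
        if i >= len(self.A):         # counter overflowed to all zeros
            self.max = -1
        else:
            self.A[i] = 1            # the single 0-to-1 flip
            if self.max < i:
                self.max = i         # new high-order 1-bit

    def reset(self):
        for i in range(self.max + 1):   # touch only positions 0 .. max
            self.A[i] = 0
        self.max = -1

c = BinaryCounter(8)
for _ in range(5):                   # count up to 5 = (101) in binary
    c.increment()
assert c.A[:3] == [1, 0, 1] and c.max == 2
c.reset()                            # scans only A[0..2], not the whole array
assert c.A == [0] * 8 and c.max == -1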
(32%) LS2, Ex. 3: Amortized bound in Theorem 2 is tight:

Theorem 2 of LS2 established an upper bound on the competitiveness ratio of MF. This question asks us to establish an essentially matching lower bound on that ratio, to show its tightness. To accomplish that, we have to demonstrate the existence of a sequence s of dictionary operations, applied to an initially empty list, that maximizes the competitiveness ratio CMF(s)/CA(s), where A is the optimum off-line algorithm for sequence s, and show that this ratio tends to 2.

Let s be any sequence of m dictionary operations applied to an initially empty list. For any algorithm A, let CA(s) = C̄A(s) + XA(s) be the total cost of algorithm A on sequence s, where XA(s) is the cost of the paid exchanges. Note that C̄MF(s) = CMF(s), since MF uses no paid exchanges. We wish to compare the Move-to-Front (MF) heuristic against the optimum off-line algorithm A for the given sequence s. In this dynamic setting there is no simple way to characterize which algorithm A would be optimal for s, nor to characterize which s would maximize the competitiveness ratio stated above. (Algorithm A highly depends on s; it is not necessarily something as simple as the DF algorithm.) To circumvent this difficulty, let us first consider another algorithm, algorithm B, which executes sequence s without using any exchanges (free or paid). So, CB(s) = C̄B(s). From Theorem 2 of LS2 we know that CMF(s) ≤ 2 CA(s), and furthermore, since A is optimum for s, we know that CA(s) ≤ CB(s). Hence,

    2 ≥ CMF(s)/CA(s) ≥ CMF(s)/CB(s).    (1)

So, all we need to do is choose a sequence s carefully and show that (a lower bound on) CMF(s)/CB(s) approaches 2 as m gets asymptotically large. That would imply that CMF(s)/CA(s) must also approach 2 for that sequence s.

To accomplish this, we consider a specific sequence s, described as follows. Let m be a given, arbitrary but sufficiently large integer. Without loss of generality, assume m is even and let n = m/2. The particular sequence s that we choose is: INSERT items x1, x2, ..., xn, in that order, followed by DELETE items x1, x2, ..., xn, in that order.

The access cost of the i-th insertion is i units for any algorithm, including MF and B (and A also!). (Why? Because according to the rules of sequential lists, the algorithm has to start from the head of the list, sequentially scan through the list to its end to verify that the item is not on the list, insert the new item xi at the end of the list, and intermittently do paid/free exchanges.) So, the total insertion cost is 1 + 2 + ... + n = n(n+1)/2 for both MF and B. At this point the list configurations for these algorithms are LMF = (xn, xn−1, ..., x2, x1) and LB = (x1, x2, ..., xn).

Now consider the n deletions. Each item to be deleted by MF is at the end of MF's list, so the total deletion cost for MF is also n(n+1)/2. Thus,

    CMF(s) = n(n+1)/2 + n(n+1)/2 = n(n+1).    (2)

Each item to be deleted by B is at the front of its list and hence costs 1 unit, for a total of n units for the n deletions. Thus,

    CB(s) = n(n+1)/2 + n.    (3)

From eqs. (2) and (3), we conclude:

    CMF(s)/CB(s) = n(n+1) / (n(n+1)/2 + n) = 2 − 4/(n+3).    (4)

From (1) and (4) we conclude:

    2 ≥ CMF(s)/CA(s) ≥ 2 − 4/(n+3).    (5)

We see that the gap between the lower and upper bounds in (5) goes to 0 as m (and hence n) goes to infinity. Therefore, the competitiveness ratio 2 is asymptotically tight.
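As a numerical sanity check, the following short Python simulation (the helper names cost_MF and cost_B are our own, not part of the solution) runs the sequence s for a few values of n and confirms eqs. (2)-(4):

def cost_MF(n):
    lst, cost = [], 0
    for i in range(1, n + 1):        # INSERT x_1, ..., x_n
        cost += len(lst) + 1         # scan to the end of the list: i units
        lst.append(i)
        lst.insert(0, lst.pop())     # free exchanges: move x_i to the front
    for i in range(1, n + 1):        # DELETE x_1, ..., x_n
        cost += lst.index(i) + 1     # x_i sits at the rear of MF's list
        lst.remove(i)
    return cost

def cost_B(n):
    lst, cost = [], 0
    for i in range(1, n + 1):
        cost += len(lst) + 1         # same insertion cost as MF
        lst.append(i)                # B does no exchanges
    for i in range(1, n + 1):
        cost += lst.index(i) + 1     # x_i is at the front: 1 unit each
        lst.remove(i)
    return cost

for n in (10, 100, 1000):
    assert cost_MF(n) == n * (n + 1)            # eq. (2)
    assert cost_B(n) == n * (n + 1) // 2 + n    # eq. (3)
    print(n, cost_MF(n) / cost_B(n), 2 - 4 / (n + 3))   # ratio matches eq. (4)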
(36%) LS3, Ex. 8: In-place Iterative Inorder Traversal:

Below we show 3 solutions: with parent pointers, with a stack, and with threading.

With parent pointers: This is given on page 22 of LS3. It uses the Successor and Minimum functions on pages 20-21 of LS3. The inorder sequence is the minimum node followed by n − 1 successive calls to the successor function, where n is the number of keys (i.e., the number of nodes) in T. Each link of the tree is traversed twice (once in each direction) by the algorithm below. Since there are n − 1 edges in a tree with n nodes, the algorithm takes O(n) time.

Algorithm Inorder1(T)
1. x ← Minimum(root[T])
2. while x ≠ nil do
3.     visit(x)
4.     x ← Successor(x)
5. end while
end

A stack and no parent pointers: Here we can use a stack to store the nodes on the current path (i.e., the ancestors of the current node). This lets us back up to the parent by popping from the stack. The asymptotic time complexity remains linear in the size of the tree. We can further simplify the code if we instead think of the recursion stack in the recursive version of the inorder traversal algorithm:

Algorithm Inorder2(T)
1. x ← root[T] ; S ← ∅
2. while x ≠ nil or S ≠ ∅ do
3.     if x ≠ nil
4.     then push(S, x) ; x ← left[x]
5.     else x ← pop(S) ; visit(x) ; x ← right[x]
6. end while
end

Threading, no stack, no parent pointers: Inorder traversal successively moves from a visited node to its successor. We can temporarily thread the tree to form access links from nodes to their successors when the latter is an ancestor of the former. See the algorithm below. Specifically, consider a node y such that right[y] = nil. Let x be the successor of y (y is the predecessor of x); that is, y is the maximum node in the left subtree of x. We can momentarily set the thread right[y] ← x when we reach x, by first descending from x to y (lines 5-8). Later on, after visiting y, we can use this thread pointer to jump from y to its successor x and then, by retraversing the path from x to y, restore right[y] back to its nil state (lines 4-9). So, by the end of the algorithm, the (pointer) structure of tree T remains intact. In this way, each edge of T is traversed up to 2 more times for the threading and unthreading process, so the total time remains O(n).

Algorithm Inorder3(T)
1. x ← root[T]
2. while x ≠ nil do
3.     if left[x] = nil then visit(x) ; x ← right[x]
4.     else do
5.         y ← left[x]
6.         while right[y] ≠ nil and right[y] ≠ x do y ← right[y]
7.         if right[y] = nil
8.         then right[y] ← x ; x ← left[x]
9.         else right[y] ← nil ; visit(x) ; x ← right[x]
10.    end else
11. end while
end
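For reference, here is a runnable Python rendering of Inorder2 and Inorder3; the Node class and the test tree are our own additions, and visit(x) is modeled by appending x's key to an output list.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder2(root):                  # stack, no parent pointers
    out, stack, x = [], [], root
    while x is not None or stack:
        if x is not None:
            stack.append(x)          # remember the ancestor, descend left
            x = x.left
        else:
            x = stack.pop()          # back up to the deepest unvisited ancestor
            out.append(x.key)        # visit(x)
            x = x.right
    return out

def inorder3(root):                  # threading, no stack, no parent pointers
    out, x = [], root
    while x is not None:
        if x.left is None:
            out.append(x.key)        # visit(x)
            x = x.right              # may follow a thread up to the successor
        else:
            y = x.left               # descend to y, the predecessor of x
            while y.right is not None and y.right is not x:
                y = y.right
            if y.right is None:
                y.right = x          # install the thread right[y] <- x
                x = x.left
            else:
                y.right = None       # unthread: tree structure restored
                out.append(x.key)    # visit(x)
                x = x.right
    return out

t = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
assert inorder2(t) == inorder3(t) == [1, 2, 3, 4, 5, 6, 7]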