
CPSC 320 2020W1: Assignment 3
This assignment is due Sunday, November 1st, 2020 at 22:00 Pacific Time. Assignments submitted within 24 hours after the deadline will be accepted, but a penalty of 15% will be applied. Please follow these guidelines:
• Prepare your solution using LaTeX, and submit a pdf file. Easiest will be to use the .tex file provided. For questions where you need to select a circle, you can simply change \fillinMCmath to \fillinMCmathsoln.
• Enclose each paragraph of your solution with \begin{soln}Your solution here…\end{soln}. Your solution will then appear in dark blue, making it a lot easier for TAs to find what you wrote.
• Start each problem on a new page, using the same numbering and ordering as this assignment handout.
• Submit the assignment via GradeScope at https://gradescope.ca. Your group must make a single submission via one group member’s account, marking all other group members in that submission using GradeScope’s interface.
• After uploading to Gradescope, link each question with the page of your pdf containing your solution.
There are instructions for doing this on the CPSC 121 website; see https://www.students.cs.ubc.ca/~cs-121/2020W. Ignore the statement about group size that is on that page.
Before we begin, a few notes on pseudocode throughout CPSC 320: Your pseudocode should communicate your algorithm clearly, concisely, correctly, and without irrelevant detail. Reasonable use of plain English is fine in such pseudocode. You should envision your audience as a capable CPSC 320 student unfamiliar with the problem you are solving. If you choose to use actual code, note that you may neither include what we consider to be irrelevant detail nor assume that we understand the particular language you chose. (So, for example, do not write #include at the start of your pseudocode, and avoid language-specific notation like C/C++/Java’s ternary (question-mark-colon) operator.)
Remember also to justify/explain your answers. We understand that gauging how much justification to provide can be tricky. Inevitably, judgment is applied by both student and grader as to how much is enough, or too much, and it’s frustrating for all concerned when judgments don’t align. Justifications/explanations need not be long or formal, but should be clear and specific (referring back to lines of pseudocode, for example). Proofs should be a bit more formal.
On the plus side, if you choose an incorrect answer when selecting an option but your reasoning shows partial understanding, you might get more marks than if no justification is provided. And the effort you expend in writing down the justification will hopefully help you gain deeper understanding and may well help you converge to the right selection :).
Ok, time to get started…

1 Statement on Collaboration and Use of Resources
To develop good practices in doing homeworks, citing resources and acknowledging input from others, please complete the following. This question is worth 2 marks.
1. All group members have read and followed the guidelines for groupwork on assignments in CPSC 320 (see https://www.students.cs.ubc.ca/~cs-320/2020W1/index.php?page=assignments&menu=1&submenu=3).
Yes No
2. We used the following resources (list books, online sources, etc. that you consulted):
3. One or more of us consulted with course staff during office hours. Yes No
4. One or more of us collaborated with other CPSC 320 students; none of us took written notes during our consultations and we took at least a half-hour break afterwards.
Yes No
If yes, please list their name(s) here:
5. One or more of us collaborated with or consulted others outside of CPSC 320; none of us took written notes during our consultations and we took at least a half-hour break afterwards.
Yes No
If yes, please list their name(s) here:

2 Recurrence relations and the Master theorem
Determine whether or not each of the following recurrence relations can be solved by applying the Master theorem. Justify why or why not, and in the cases where the recurrence can be solved by applying the Master theorem, give a Θ bound on the solution to the recurrence. Assume that in all cases, T (n) ∈ Θ(1) when n is sufficiently small.
1. [4 marks] T (n) = 4T (⌊n/2⌋) + nρ(n) where ρ(n) is the number of 1’s in the binary representation of n.
Solution: We have a = 4, b = 2 and f(n) = nρ(n). Clearly log_b a = log_2 4 = 2. Since ρ(n) ≤ log_2 n + 1, we have f(n) ∈ O(n log n) ⊆ O(n^(2−ε)) for ε = 0.5 (because log n ∈ O(n^0.5)). This means we are in case 1 of the Master theorem, and so T(n) ∈ Θ(n^2).
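As a sanity check on the Θ(n^2) bound, the recurrence can be evaluated directly. The snippet below (function names are ours; it is an illustration, not part of the required solution) uses exact integer arithmetic and memoization:

```python
from functools import lru_cache

def rho(n):
    """Number of 1's in the binary representation of n."""
    return bin(n).count("1")

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4*T(floor(n/2)) + n*rho(n), with T(n) = 1 when n < 2."""
    if n < 2:
        return 1
    return 4 * T(n // 2) + n * rho(n)

# If T(n) ∈ Θ(n^2), the ratio T(n)/n^2 stays bounded; for powers of two
# (n = 2^t, so rho(n) = 1 at every level) it works out to exactly 2 - 2^(-t).
for t in (10, 14, 18):
    n = 2 ** t
    print(n, T(n) / n ** 2)
```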
2. [4 marks] T (n) = 2T (⌊n/4⌋) + nρ(n) where ρ(n) is the number of 1’s in the binary representation of n.
Solution: We will show that case 3 of the Master theorem can be applied to this recurrence. We have a = 2, b = 4 and f(n) = nρ(n), so log_b a = log_4 2 = 0.5. Clearly f(n) ≥ n, and hence f(n) ∈ Ω(n^(0.5+ε)) for ε = 0.5. Now we check the regularity condition. The binary representation of ⌊n/4⌋ is the same as that of n, except that the last two bits are missing. Hence ρ(⌊n/4⌋) ≤ ρ(n), which means that
af(⌊n/b⌋) = 2f(⌊n/4⌋) = 2⌊n/4⌋ρ(⌊n/4⌋) ≤ 2(n/4)ρ(n) = (1/2)nρ(n) = (1/2)f(n),
and so af(⌊n/b⌋) ≤ (1/2)f(n) < 0.9f(n). Hence the regularity condition holds (with δ = 0.9), which means that T(n) ∈ Θ(nρ(n)).

3. [4 marks] T(n) = 2T(⌊n/5⌋) + nρ(n) where ρ(n) is the number of 1’s in the binary representation of n.

Solution: We have a = 2, b = 5, and n^(log_b a) = n^(log_5 2) ≈ n^0.43. Now, f(n) = nρ(n) ≥ n. Hence f(n) ∈ Ω(n^(log_b a + ε)) for ε = 0.5. This means that the only possible case of the Master theorem that might apply is case 3. Let us now verify the regularity condition. Consider a value of n that is a power of 2, say n = 2^t. Clearly ρ(n) = 1. What is ρ(⌊n/5⌋)? Observing that 1/5 in binary is 0.00110011001100110011…, we can see that ρ(⌊n/5⌋) ≈ t/2. So af(⌊n/b⌋) = 2f(⌊n/5⌋) ≈ 2(n/5)(t/2) = (n/5) log_2 n. Clearly this is not at most δf(n) = δnρ(n) = δn for any constant δ < 1, since (log_2 n)/5 eventually exceeds any such δ. Hence the regularity condition does not hold, which means that we cannot apply case 3 of the Master theorem. Since this was the only case that might have applied, we cannot apply the Master theorem.

3 Solving a recurrence relation

Using a method of your choice, give a tight upper bound on the solution to the following recurrence relation:

T(n) = 3T(⌊4n/9⌋) + 3T(⌊n/9⌋) + n^1.5   if n ≥ 9
T(n) = 1                                 if n ≤ 8

Solution: We use a recursion tree. I have drawn the first three levels of the tree, showing only the children of two of the level-2 nodes on level 3. [Figure omitted.] The root does n^1.5 work. The sum of the work on level 2 is 3(4n/9)^1.5 + 3(n/9)^1.5 = 3·(8/27)n^1.5 + 3·(1/27)n^1.5 = n^1.5. The sum of the work on level 3 is similarly n^1.5 (one way to look at this is to notice that the sum of the work done by the children of a node N is exactly the same as the work done by N). The tree has log_{9/4} n levels in total, which gives us an O(n^1.5 log n) upper bound on T(n). It turns out that this upper bound is tight (although you were not asked to prove that) because each of the first log_9 n levels does exactly n^1.5 work.
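The O(n^1.5 log n) recursion-tree bound can likewise be checked numerically. A sketch (function name and test sizes are ours; floors are taken exactly as in the recurrence):

```python
from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 3*T(floor(4n/9)) + 3*T(floor(n/9)) + n^1.5 for n >= 9, else 1."""
    if n <= 8:
        return 1.0
    return 3 * T(4 * n // 9) + 3 * T(n // 9) + n ** 1.5

# If T(n) ∈ Θ(n^1.5 log n), this ratio should stay bounded as n grows.
for n in (10 ** 3, 10 ** 5, 10 ** 7):
    print(n, T(n) / (n ** 1.5 * log(n)))
```

Memoization keeps this fast: the subproblems reachable from n are (roughly) the values n·(4/9)^a·(1/9)^b, of which there are only polynomially many in the depth of the tree.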
4 Dividing and Conquering the New York Stock Exchange

A businessperson is considering buying shares in companies traded on the New York Stock Exchange and then selling those shares within a few days. This businessperson knows, for each company:

• the average profit one is likely to realize by buying/selling their shares;
• how safe this specific investment is.

Each company can thus be represented by a point in the plane: the x coordinate represents the likely profit, and the y coordinate is the risk associated with it (where a higher y coordinate corresponds to a safer investment). Given two such points, we will say that the point p = (p.x, p.y) corresponding to one investment is strictly worse than the point q = (q.x, q.y) corresponding to another investment if p.x ≤ q.x and p.y ≤ q.y, and either p.x < q.x or p.y < q.y (or both). That is, q lies to the right of and/or above p, as illustrated in the picture on the left. Clearly, if p is strictly worse than q, then p is of no interest because q is both less risky and has a higher average profit than p. Hence, to help you reach a decision, you only need to worry about locations that correspond to maximal points: those that are not strictly worse than any other point. For instance, in the picture on the right, the points p3, p4 and p6 are maximal, as you can see from the fact that their upper-right quadrants are empty.

[Figure: left, a point p that is strictly worse than a point q; right, points p1 through p7, of which p3, p4 and p6 are maximal.]

1. [3 marks] Describe an algorithm that takes as input a set P of points, and a point q, and returns all points of P that are strictly worse than q.

Solution: All that we need to do is check each point of P one at a time.

Algorithm StrictlyWorse(P, q)
  S ← ∅
  for each point p of P do
    if (p.x ≤ q.x) AND (p.y ≤ q.y) AND ((p.x < q.x) OR (p.y < q.y)) then
      add p to S
    end if
  end for
  return S

[The remaining parts of this question and the statement of the next problem are missing from this copy. Judging from the solution below, that problem asks for a greedy algorithm choosing a minimum-size subset S′ of the maximal points S such that every point of S has a point of S′ within ∆p in the x coordinate and ∆r in the y coordinate.]

Algorithm GreedyCover(S, ∆p, ∆r)
  ⊲ S[1..n] holds the maximal points, sorted by increasing x coordinate (hence decreasing y).
  S′ ← ∅
  Let p = S[1]               // the leftmost point; not yet covered
  Let mode = P_UNCOVERED
  i ← 2
  while i ≤ n do
    if mode = P_UNCOVERED then
      // p is the leftmost point not yet covered.
      // We are scanning for the rightmost point that still covers p.
      if (S[i].x − p.x > ∆p) OR (p.y − S[i].y > ∆r) then
        // We’ve gone too far. Choose the preceding point to cover the old p.
        Add S[i − 1] to S′
        Let p = S[i − 1]
        Let mode = P_COVERED
        i − −                ⊲ Decrement i so we’ll reconsider S[i] on the next iteration.
      end if
    else
      // In this case, p is the rightmost point in S′.
      // We are trying to skip over any more points covered by p.
      if (S[i].x − p.x > ∆p) OR (p.y − S[i].y > ∆r) then
        // We’ve found the first point not covered by the old p.
        Let p = S[i]
        Let mode = P_UNCOVERED
      end if
    end if
    i + +
  end while
  if mode = P_UNCOVERED then
    Add S[n] to S′           // S[n] covers p and every point scanned after p
  end if
  return S′
It’s almost obvious that the runtime is O(n), since the while loop goes from 2 to n, and the work inside the loop is O(1). However, in the way I coded things above, it’s possible to decrement i before incrementing it, which lets i stay the same on some iterations. The key to solving this is to note that whenever i gets decremented, we set the mode to P_COVERED, which forces i to be incremented on the next iteration. So, we can go around the loop at most 2n times, which is still O(n).
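For concreteness, the same greedy scan can be sketched in Python without the explicit mode variable (the names greedy_cover and covers are ours, and we assume the input list is already sorted by increasing x coordinate):

```python
def greedy_cover(S, dp, dr):
    """Greedy sketch: S is the list of maximal points, sorted by increasing x
    (hence decreasing y); dp and dr play the roles of Delta_p and Delta_r.
    Returns a subset of S such that every point of S is within dp in x and
    dr in y of some chosen point."""
    def covers(a, b):
        # a covers b when both coordinate gaps are within the thresholds
        return abs(a[0] - b[0]) <= dp and abs(a[1] - b[1]) <= dr

    chosen = []
    i, n = 0, len(S)
    while i < n:
        p = S[i]                            # leftmost point not yet covered
        j = i
        while j + 1 < n and covers(S[j + 1], p):
            j += 1                          # rightmost point that still covers p
        chosen.append(S[j])
        i = j + 1
        while i < n and covers(S[i], S[j]):
            i += 1                          # skip points already covered by S[j]
    return chosen
```

The two inner while loops play the roles of the P_UNCOVERED and P_COVERED modes, respectively; each index is visited O(1) times, so the scan is O(n).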
2. [5 marks] Prove that your algorithm computes an optimal solution. Your proof should be in the form of a “greedy stays ahead” or an “exchange argument” proof, as described in the textbook. (Hint (for both parts): Think about the leftmost point first.)
Solution: We’ll roughly follow the style of the “greedy stays ahead” proof in Section 4.1 of the textbook.
The main geometric insight needed for this question is to realize that because the points in S are the maximal points as defined in Question 4, if we arrange them in order of increasing x coordinates, then they are also in order of decreasing y coordinates. Furthermore, no two points will have the same x or y coordinates — if you look at the points in S in order, the x coordinates are strictly increasing, and the y coordinates are strictly decreasing.
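This geometric insight is easy to sanity-check with a brute-force computation of the maximal points (function names and the random instance are ours; this is an illustration only):

```python
import random

def maximal_points(P):
    """Brute-force O(n^2): points of P not strictly worse than any other point."""
    def strictly_worse(p, q):
        # p is strictly worse than q: q at least as good in both coordinates, better in one
        return p[0] <= q[0] and p[1] <= q[1] and (p[0] < q[0] or p[1] < q[1])
    return [p for p in P if not any(strictly_worse(p, q) for q in P)]

random.seed(320)
P = [(random.random(), random.random()) for _ in range(200)]
S = sorted(maximal_points(P))          # sort by increasing x
ys = [y for _, y in S]
assert ys == sorted(ys, reverse=True)  # y coordinates are strictly decreasing
```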
There are actually two parts to proving “that your algorithm computes an optimal solution”: (1) proving that the algorithm computes a correct solution (one that covers all of S), and (2) proving that the solution is optimal.
Part (1) is easy to overlook, and we will not penalize you if you forgot to do this. At a high level, it’s very easy: the algorithm goes through all the points from left to right and makes sure that no point is left uncovered, so nothing is left uncovered. QED. (To actually prove the pseudocode correct, though, would be a lot more work, and I (Alan) actually wouldn’t be surprised if I had a bug in the pseudocode somewhere (e.g., if I had forgotten the i−− decrement, it’s possible I could have missed a point).

Fortunately, this is an algorithms class, so the high-level “proof” is good enough.)
Part (2), proving optimality, is the part that we emphasize. For our “greedy stays ahead” proof, we will prove that each time the greedy algorithm adds a point to S′, it will have covered at least as many points as any optimum algorithm could with the same number of points (counted from the left).
Formally, let Sg′ be the solution computed by the greedy algorithm, let Sg′ [i] be the ith point (in left- to-right, increasing x-coordinate order) added to Sg′ , and let Sg′ [1..i] be the first i points in Sg′ . Let So′ , So′ [i], and So′ [1..i] be the corresponding notations for an optimum solution. We will prove by induction that for all i, the set of points covered by Sg′ [1..i] is a superset (not necessarily a proper superset) of the set of points covered by So′ [1..i]. This will imply that the optimal solution can’t possibly cover all of S with fewer points in So′ than the greedy solution has in Sg′ .
Base Case: When i = 0, neither algorithm has covered any of S, so the set of points covered by Sg′ [1..i] is a superset of (in fact, it’s the same set as) the set of points covered by So′ [1..i] (which is the empty set).
Inductive Case:
Assume the inductive hypothesis holds for Sg′ [1..i] and So′ [1..i]. Let pg be the first (leftmost) point in S that is not covered by Sg′ [1..i], and let po be the first (leftmost) point in S that is not covered by So′ [1..i]. By the inductive hypothesis, we know that po.x ≤ pg.x and po.y ≥ pg.y, i.e., that po can’t be “further along” than pg (but they might be the same point).
Note that if you pick any point in S whose x coordinate is in the range [pg.x,pg.x+∆p] and whose y coordinate is in the range [pg.y,pg.y−∆r], then you cover pg, as well as any other points that might lie in that rectangle (and possibly additional points in S with larger x coordinates and smaller y coordinates). Furthermore, if you pick a point with x coordinate greater than pg.x + ∆p or y coordinate less than pg .y − ∆r , that point can’t cover pg , as it’s too far away. The greedy algorithm will choose the point in S that is as far to the right as possible (largest x coordinate, and therefore, smallest y coordinate) that is still in this rectangle. Call this point qg, and this point is also Sg′ [i + 1].
Similarly, the optimal solution needs some point So′ [i + 1] to cover po, and this point must have x coordinate in the range [po.x, po.x+∆p] and y coordinate in the range [po.y, po.y−∆r]. (We’ve assumed that So′ is sorted in order of increasing x coordinates.) For convenience, we’ll refer to the point So′ [i + 1] also as qo. Recall that we know that po.x ≤ pg.x and po.y ≥ pg.y, so po.x + ∆p ≤ pg.x + ∆p and po.y − ∆r ≥ pg.y − ∆r. Since we’ve chosen qg to be the point with the highest x coordinate (and therefore lowest y coordinate) that fits in a rectangle with a higher x bound and lower y bound, we have that qo.x ≤ qg.x and qo.y ≥ qg.y, i.e., the point qg is “at least as far along” in S as qo is. That means that the set of additional points in S that are covered by qg extends at least as far to the right as the set of additional points in S that are covered by qo, and therefore, the set of points covered by Sg′ [1..i + 1] is a superset (not necessarily a proper superset) of the set of points covered by So′ [1..i + 1]. This completes the proof.
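The optimality claim can also be spot-checked empirically: on small instances we can compare the greedy answer against a brute-force optimum. All names and the instance generator below are ours; this is a sanity check, not part of the proof:

```python
import itertools
import random

def covers(a, b, dp, dr):
    # a covers b when both coordinate gaps are within the thresholds
    return abs(a[0] - b[0]) <= dp and abs(a[1] - b[1]) <= dr

def greedy_cover(S, dp, dr):
    """The greedy scan: S sorted by increasing x (hence decreasing y)."""
    chosen, i, n = [], 0, len(S)
    while i < n:
        p, j = S[i], i
        while j + 1 < n and covers(S[j + 1], p, dp, dr):
            j += 1                       # rightmost point that still covers p
        chosen.append(S[j])
        i = j + 1
        while i < n and covers(S[i], S[j], dp, dr):
            i += 1                       # skip points already covered by S[j]
    return chosen

def optimum_size(S, dp, dr):
    """Smallest k such that some k points of S cover all of S (brute force)."""
    for k in range(1, len(S) + 1):
        for C in itertools.combinations(S, k):
            if all(any(covers(c, p, dp, dr) for c in C) for p in S):
                return k
    return 0

random.seed(320)
for _ in range(50):
    # random instances with strictly increasing x and strictly decreasing y
    n = random.randint(1, 8)
    xs = sorted(random.sample(range(100), n))
    ys = sorted(random.sample(range(100), n), reverse=True)
    S = list(zip(xs, ys))
    assert len(greedy_cover(S, 10, 10)) == optimum_size(S, 10, 10)
```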