
CSE 3101 Design and Analysis of Algorithms
Solutions for Practice Test for Unit 4 Greedy Algorithms
Jeff Edmonds
1. Integer-Knapsack Problem:
Instances: An instance consists of ⟨V, ⟨v1, p1⟩ , . . . , ⟨vn, pn⟩⟩. Here, V is the total volume of the knapsack. There are n objects in a store. The volume of the ith object is vi, and its price is pi.
Solutions: A solution is a subset S ⊆ [1..n] of the objects that fit into the knapsack, i.e., ∑i∈S vi ≤ V.
Measure Of Success: The cost (or success) of a solution S is the total value of what is put in the knapsack, i.e., ∑i∈S pi.
Goal: Given a set of objects and the size of the knapsack, the goal is to fill the knapsack with the greatest possible total price.
Failed Greedy Algorithms: Three obvious greedy algorithms take first the most valuable object, maxi pi, the smallest object, mini vi, or the object with the highest value per volume, maxi pi/vi. However, they do not work. For each of these greedy algorithms, give an instance on which it does not work.
• Answer: Consider the counterexample ⟨8, ⟨5, 5⟩ , ⟨4, 3⟩ , ⟨4, 3⟩ , ⟨1, 0⟩⟩. If one takes the highest value or highest density, then one takes the first object. If one takes the smallest volume, then one takes the last object. The only optimal solution contains the middle two objects.
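The three failed heuristics can be checked mechanically on this instance. The following is a sketch (not from the original handout) of a generic greedy loop that repeatedly takes the best remaining object, by a given criterion, that still fits:

```python
def greedy_knapsack(volume, items, key):
    """Repeatedly take the best remaining item (by `key`) that still fits."""
    taken_price = 0
    used = 0
    for v, p in sorted(items, key=key, reverse=True):
        if used + v <= volume:
            used += v
            taken_price += p
    return taken_price

V = 8
items = [(5, 5), (4, 3), (4, 3), (1, 0)]   # (volume, price) pairs

by_price   = greedy_knapsack(V, items, key=lambda it: it[1])          # total price 5
by_volume  = greedy_knapsack(V, items, key=lambda it: -it[0])         # smallest first: total price 3
by_density = greedy_knapsack(V, items, key=lambda it: it[1] / it[0])  # total price 5
# The optimal solution takes the two <4,3> objects, for a total price of 6.
```

All three heuristics fall short of the optimal total of 6, matching the counterexample in the answer.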
2. We proved that the greedy algorithm does not work for the making change problem when the denominations of the coins are 4, 3, and 1 cent, but it does work when the denominations are 25, 10, 5, and 1. Does it work when the denominations are 25, 10, and 1, with no nickels?
• Answer: The greedy algorithm does not work for the making change problem when the denominations of the coins are 25, 10, and 1, with no nickels. Suppose the amount is 30. The greedy algorithm will take one quarter and five pennies. However, the optimal has only three dimes.
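The standard greedy rule for making change (take as many of the largest coin as possible, then move to the next) can be sketched as follows, showing the failure on 30 cents:

```python
def greedy_change(amount, denominations=(25, 10, 1)):
    """Take as many of the largest denomination as possible, then the next,
    and so on.  Assumes a 1-cent coin exists so a solution always exists."""
    coins = []
    for d in denominations:
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

coins = greedy_change(30)   # [25, 1, 1, 1, 1, 1]: six coins
# The optimal solution uses only three coins: [10, 10, 10].
```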
3. Greedy Algorithms: A man is standing on the bank of a river that he must cross by jumping on stepping stones which are spread in a line across the river. Though he can’t jump far, he wants to jump on as few stones as possible. An instance to the problem specifies ⟨d0, d1, d2, . . . , dn⟩, where d0 = 0 is the man’s initial location, di for i ∈ [1..n − 1] is a real number giving the distance in meters that the ith stone is from him, and dn is the distance to the opposite shore. Assume that the stones are in order, i.e. 0 = d0 ≤ d1 ≤ d2 ≤ . . . ≤ dn. Also assume that the stones are not more than one meter apart, i.e. ∀i di+1 − di ≤ 1.
A solution is a sequence ⟨i0, i1, i2, . . . il⟩ of the indexes of the stones indicating that at time t the man jumps onto the itth stone. Such a solution is valid if at time t = 0 the man is in fact on the close shore, i.e. i0 = 0, he finishes on the far shore, i.e. il = n, and at each time step t′ the distance he jumps is at most a distance of one meter, i.e. dit′ − dit′−1 ≤ 1. The cost of a solution is the number l of stones jumped onto.
Note that the assumption on the input ensures that a valid solution always exists.
Specify a greedy algorithm for finding an optimal solution for this problem. Imagine that you are crossing a river on stones. How would you choose which stone to jump on next?
As done in the steps, prove that your algorithm always returns an optimal solution.
Warning: To ensure that a solution is valid, you really must check the distance jumped at each time step t′.
• Answer:

The Greedy Choice: Each iteration the algorithm grabs the object which according to some simple criteria, seems to be the “best” (or “worst”) from amongst the unconsidered objects in the instance. Suppose that the current time is t − 1 and the last stone jumped on is the it−1th stone. In a greedy way, have the man jump on the stone that is as far from it−1 as he can jump, i.e. the largest it for which dit − dit−1 ≤ 1. This is an adaptive decision, i.e. it does depend on what objects have been seen already.
You can think of any stone that is before stone it−1 as having greedy criteria weight −∞ because we don’t want to jump backwards and any stone that is more than one meter ahead of stone it−1 also has the greedy criteria weight −∞ because we don’t want to land in the water and each other stone j has greedy criteria dj − dit−1 . Then you let it be the stone j that maximizes this greedy criteria. The algorithm then makes an irrevocable decision about this object. Assuming such a stone exists, the algorithm has the man jump on it. Otherwise, the algorithm stops and states that it is impossible.
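The greedy rule described above can be sketched in code as follows (an assumed implementation, not from the original handout): from the current stone, jump to the farthest stone that is at most one meter away.

```python
def cross_river(d):
    """d[0] = 0 is the near shore, d[-1] the far shore; d is sorted and
    consecutive gaps are at most one meter, so a solution always exists.
    Returns the list of indices jumped on, starting with 0."""
    path = [0]
    cur = 0
    while cur < len(d) - 1:
        # Greedily pick the farthest stone reachable from the current one.
        nxt = cur
        for j in range(cur + 1, len(d)):
            if d[j] - d[cur] <= 1:
                nxt = j
            else:
                break
        path.append(nxt)
        cur = nxt
    return path

path = cross_river([0, 0.4, 0.9, 1.7, 2.0, 2.6])   # [0, 2, 3, 5]: three jumps
```

The instance here is made up for illustration; the assumption that consecutive stones are at most one meter apart guarantees the inner loop always makes progress.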
The Loop Invariant: We have not gone wrong. If there is a solution, then there is at least one optimal solution St extending the choices At made so far by the algorithm.
Initially (⟨pre⟩ → ⟨LI⟩): Initially no choices have been made and hence all optimal solutions are consistent with these choices.
Maintaining the Loop Invariant (⟨LIt−1⟩ & not ⟨exit⟩ & codeloop → ⟨LIt⟩): Consider an arbitrary iteration.
Fly in From Mars: The algorithm has iterated a few times. Assume that it is time t−1 and that the algorithm has already decided for the man to step on the stones indexed by ⟨i0, i1, i2, . . . , it−1⟩. All that we know is that the loop invariant is true and the exit condition is not.
St−1: The loop invariant states that there is at least one optimal solution that extends the choices At−1 made by the algorithm before this iteration. Let St−1 = ⟨i0, i1, i2, . . . , it−1, jt, jt+1, jt+2, . . . , jl⟩ denote one such solution. (Note that this is a full solution and as such specifies a decision about each object in the instance and that the first t stones do agree with what the algorithm has done.) I like to have a fairy god mother hold it.
Taking a Step: During the iteration, the algorithm proceeds to choose the “best” object from amongst those not considered so far and makes an irrevocable decision about it. In a greedy way, the algorithm just had the man jump on the stone that is as far from it−1 as he can jump, i.e. the largest it for which dit − dit−1 ≤ 1. There is only this case, because we are assuming that such a stone exists.
Instructions for Modifying St−1: The prover says to the fairy god mother, “Fairy god mother, I know that you have your full sequence of stones across the river planned out and that like the algorithm, at time t−1, you are standing on the it−1th stone. If from there you jump to the same stone as the algorithm did, i.e. jt = it, then no changes need to be made to your solution, i.e. St = St−1.” Otherwise, because the algorithm stepped as far as possible, she must have stepped closer, i.e. her stone jt must be between the stones it−1 and it. (If she jumped on more than this one stone between it−1 and it then her solution could not have been optimal.) Tell her to jump from stone it−1 not to jt but to go farther to it. From it, she should continue on as she did before, i.e. St = St−1 − jt + it = ⟨i0, i1, i2, . . . , it−1, it, jt+1, jt+2, . . . , jl⟩.
Proving St is an Optimal Solution: By the loop invariant, St−1 is one of the valid solutions for this instance with the minimum number of stones. We must prove that going from St−1 to St does not increase the number of stones. We added one stone, so we must make sure that we also delete at least one. To prove this, it is sufficient to prove that the next stone jt that she jumps on after it−1 is to the left of stone it. We know that the algorithm chose to jump from stone it−1 to stone it because it is the farthest stone from the initial shore that can be jumped on from it−1. We also know that the fairy god mother legally jumps from it−1 to jt. We conclude that jt is either the same as stone it or is closer to stone it−1 than it. It follows that at least one stone jt is included in the sequence deleted. Hence St is at least as good as St−1. Hence, St is also an optimal solution.
Proving St is a Valid Solution: By the loop invariant St−1 is a valid solution (i.e. the initial stone is on the initial shore, the final stone is on the final shore, and each jump has distance at most one). The prover made sure that his modifications did not make it invalid. We have St = ⟨i0, i1, i2, . . . , it−1, it, jt+1, jt+2, . . . , jl⟩. We know all the jumps from it′ to it′+1 for t′ ∈ [0, t − 2] have distance at most one because both the algorithm and the fairy god mother in St−1 have assured it. We know the jump from it−1 to it has distance at most one because the algorithm chose it this way. We know the jump from it to jt+1 has distance at most the distance from jt to jt+1 because we proved that jt is to the left of it, and we know that this distance is at most one because the fairy god mother manages to jump this in St−1. Finally, we know all the jumps from jt′ to jt′+1 for t′ ≥ t + 1 have distance at most one because the fairy god mother in St−1 has assured it. In conclusion, St is a valid solution.
Proving St Extends At: By the loop invariant St−1 extends all the decisions ⟨i0, i1, i2, . . . , it−1⟩ made by the algorithm before this iteration. The prover made sure that his modifications did not change any of these decisions and changed St−1’s decision jt about the latest object to be consistent with what the algorithm did, i.e. it. Hence, St is consistent with both the earlier decisions made by the algorithm and this most recent decision, i.e. extends At.
→ ⟨LIt⟩: Because St witnesses the fact that there is at least one valid optimal solution that extends At, we know that the loop invariant has been maintained.
Exiting Loop (⟨LI⟩ & ⟨exit⟩ → ⟨post⟩): By the exit condition the algorithm has made a decision about every object in the instance. Hence, the algorithm has a full solution Aexit. By the loop invariant, this extends an optimal valid solution Sexit. Hence, the algorithm must have an optimal valid solution.
4. Job/event scheduling problem with multiple rooms:
Review the job/event scheduling problem from Section 16.2.1. This problem is the same except you have r rooms/processors within which to schedule the set of jobs/events. An instance is ⟨r, ⟨s1, f1⟩ , ⟨s2, f2⟩ , . . . , ⟨sn, fn⟩⟩, where as before 0 ≤ si ≤ fi are the starting and finishing times for the ith event. But now the input also specifies the number of rooms r. A solution for an instance is a schedule S = ⟨S1, . . . , Sr⟩ for each of the rooms. Each of these consists of a subset Sj ⊆ [1..n] of the events that don’t conflict by overlapping in time. The success of a solution S is the number of events scheduled, that is, |∪j∈[r] Sj|. Consider the following four algorithms:
(a) Find the greedy solution for the first room. Then find the greedy solution for the second from the remaining events. Then the third room. And so on.
(b) It starts by sorting the events by their finishing times, just like in the one-room case. Then, it looks at each event in turn, scheduling it if possible. If it can be scheduled in more than one room, we assign it to the first room in which it fits, i.e., first try room one, then room two, and so on until it fits. If it cannot be scheduled in any room, then it is not scheduled.
(c) Same except, the next event is scheduled in the room with the latest last-scheduled finishing time. For example, suppose the last event scheduled in rooms 1, 2, and 3 finishes at times 10, 15, and 18, and the next event starts at time 17. Then the next event could be scheduled into either room 1 or 2 but not in 3. This algorithm would schedule it in room 2. Note this is the room that minimizes the gap between the starting time of the new job and the finishing time of the previously scheduled job for the room. Here a gap of size 17 − 15 = 2 is better than one of size 17 − 10 = 7 or of size 17 − 18 = −1.
(d) Same except, the next event is scheduled in the room with the earliest last-scheduled finishing time. Note this is the room that maximizes the aforementioned gap.
Prove that three of these algorithms do not lead to an optimal schedule and that the remaining one does.
• Answer: Algorithms (a), (b), and (d) are sub-optimal for the following counterexample instance.
[Figure: (a) the events of the instance and their optimal schedule in two rooms; (b) the suboptimal schedule produced by algorithms (a), (b), and (d).]
Fig a gives the events in the instance and the optimal schedule in two rooms. Fig b gives the suboptimal schedule produced by these three algorithms. Note that the third algorithm, which schedules the next event in the schedulable room with the latest last-scheduled finishing time, gives the optimal schedule. We will now prove that it always gives an optimal solution.
Specifications: The objects are the events. For each, we must either choose a room for it to be scheduled in or choose not to schedule it. Such a sequence of decisions makes a valid solution if no two events scheduled in the same room overlap. The measure according to which the solution is optimal is the total number of events scheduled.
The Greedy Choice: Each iteration the algorithm grabs the object which according to some simple criteria, seems to be the “best” (or “worst”) from amongst the unconsidered objects in the instance. This algorithm considers the events in the fixed order sorted by their finishing times. Let us denote this next event by Objt, or more simply by i. The algorithm must make an irrevocable decision about which room it will be scheduled into. When considering the next event, the algorithm considers the set of rooms in which this event can be scheduled. Of those, it is scheduled in the room with the latest last-scheduled finishing time. If there are no such rooms, then the event is not scheduled.
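This greedy choice can be sketched in code as follows. This is an assumed implementation, not from the original handout, and the tie-breaking among equally full rooms is an arbitrary choice of the sketch; events are (start, finish) pairs, and an event fits in a room if its start is no earlier than the room’s last-scheduled finishing time.

```python
def schedule(r, events):
    """Algorithm (c): consider events in order of finishing time; put each
    schedulable event in the fitting room with the latest last-scheduled
    finishing time.  Returns a list of r room schedules (event indices)."""
    order = sorted(range(len(events)), key=lambda i: events[i][1])
    last_finish = [0] * r            # last-scheduled finishing time per room
    rooms = [[] for _ in range(r)]
    for i in order:
        s, f = events[i]
        # rooms in which event i fits (starts no earlier than the last finish)
        fitting = [j for j in range(r) if last_finish[j] <= s]
        if fitting:
            j = max(fitting, key=lambda j: last_finish[j])
            rooms[j].append(i)
            last_finish[j] = f
        # otherwise event i is not scheduled

    return rooms

# Mirrors the example above: rooms end at 10, 15, and 18; the event starting
# at 17 fits in the first two rooms and goes to the one finishing at 15.
rooms = schedule(3, [(0, 10), (0, 15), (0, 18), (17, 20)])   # [[0], [1, 3], [2]]
```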
The Loop Invariant: We have not gone wrong. If there is a solution, then there is at least one optimal solution St that extends the choices At made so far by the algorithm.
Initially (⟨pre⟩ → ⟨LI⟩): Initially no choices have been made and hence all optimal solutions extend choices A0.
Maintaining the Loop Invariant (⟨LIt−1⟩ & not ⟨exit⟩ & codeloop → ⟨LIt⟩): Consider an arbitrary iteration.
St−1: The loop invariant states that there is at least one optimal solution that extends the choices At−1 made by the algorithm before this iteration. Let St−1 denote one such solution. (Note that this is a full solution and as such specifies a decision about each object in the instance.) If you like, have a fairy god mother hold it.
Second Case: If our greedy algorithm did not schedule the next event i, then this event must conflict in each room with a previously scheduled event. Hence, St−1 cannot have this next event i scheduled either because it too has scheduled these previous events. Hence, St−1 itself is already consistent with this most recent choice.
Taking a Step: Now assume that our greedy algorithm scheduled the next event i in room j.
Instructions for Modifying St−1: There are three cases. If both our greedy algorithm and St−1 scheduled the next event i in room j then no changes need to be made and we are done.
If our greedy algorithm scheduled the next event i in room j and St−1 does not schedule this event at all, then we modify the schedule St−1 into St by adding i to room j and removing any events from j that conflict with it. Just as done with the one room scheduling algorithm in Section 16.2.1, we can prove that only one event is removed and hence St is valid, is optimal, and extends At.
The remaining case occurs when our greedy algorithm scheduled the next event i in room j and St−1 schedules it in room j′. (See fig a.)
[Figure: (a) St−1 schedules event i (Objt) in room j′ while the algorithm schedules it in room j; (b) the modified schedule St, with the non-At−1 events of rooms j and j′ swapped.]
We modify the schedule St−1 into St as follows. (See fig b.) We cannot move the events in At−1 whose schedule has already been committed to by the algorithm because St must continue to extend these choices. (See the events in the circle.) We need to move event i from room j′ to room j so that it too is consistent with what the algorithm has done. But doing this change may create conflicts, i.e. overlapping events. We fix any newly created overlaps as follows. Put in a rectangle every non-At−1 event scheduled in room j or j′. Note these all have finishing times of event i or later. We swap the room j rectangle events with the room j′ ones.
We now prove that the resulting solution St is valid, extends At, and is optimal.
Proving St is a Valid Solution: By the loop invariant St−1 is a valid solution, i.e. events in the same room do not overlap in time. The changes may have created conflicts (overlapping events), but the prover fixed all of these, so St is a valid solution. More specifically, there are no new overlaps between events in At−1 (circle) because they did not change. There are no new overlaps between the non-At−1 (rectangle) events because they flipped rooms all together. Hence, we only need to worry about the boundary between the At−1 and the non-At−1 events. The non-At−1 events that flipped from room j′ to j do not now overlap with the At−1 events in room j, because they are led by event i and the algorithm scheduled it there. The non-At−1 events that flipped from room j to j′ do not now overlap with the At−1 events in room j′, because they did not conflict with those in room j and we know by the algorithm’s choice of room j that the last At−1 event’s finishing time for j is later than that for room j′.
Proving St Extends Algorithm’s Choices At: By the loop invariant St−1 extends all the decisions At−1 made by the algorithm before this iteration. The prover made sure that his modifications did not change any of these decisions and changed St−1’s decision about the latest event i to be consistent with what the algorithm did. We did not move any events in At−1. We moved event i from room j′ to room j to make St consistent with this most recent choice. Hence, St is consistent both with the earlier decisions At−1 made by the algorithm and this most recent decision i, and hence extends At.
Proving St is an Optimal Solution: By the loop invariant St−1 is one of the valid solutions for this instance with optimal (minimum or maximum as the case may be) measure of success. The prover made sure that his modifications did not make St worse than St−1. The schedule St has the same number of events as St−1. Hence, St is also an optimal solution.
→ ⟨LIt⟩: Because St witnesses the fact that there is at least one optimal solution that extends At, we know that the loop invariant has been maintained.
Exiting Loop (⟨LI⟩ & ⟨exit⟩ → ⟨post⟩): By the exit condition the algorithm has made a decision about every object in the instance. Hence, the algorithm has a full solution. By the loop invariant, this extends an optimal valid solution. Hence, the algorithm must have an optimal valid solution.