
1. For the dynamic array problem (see class notes), consider handling insertions and deletions. Assume that deletion always involves the rightmost element in the array.
At any moment, the array storing your data must have a size that is Θ(k), where k is the number of elements currently stored. In other words, you don’t want to be occupying too much empty space. This means that at some point you might have to downsize, i.e., copy all the data into a smaller array.
(a) When should you double and when should you downsize, to get low amortized cost? (b) Explain (informally, if you wish) why the amortized cost is low for your strategy.
Answer:
To downsize, we create a new array of smaller size, and copy the data over. We need to reduce the array size when the size of our remaining data set shrinks too much. Recall that our rule for the insertion-only case was to double the array when it is full. We could simply reverse that rule, and copy our data to an array of half the current size, when it goes from just over half-full to exactly half-full. This would cost O(n) time, where n is the number of elements currently in the array. Unfortunately this is not a good idea, because we could force several linear-time operations in a row, with a sequence of insert-delete-insert-delete, etc., happening just at the critical position where we double or divide. This would not lead to an amortized improvement.
(Answer to a)
Instead, we could make sure that there is a significant difference between the critical “densities” at which we double or divide. For instance, keep our rule where we double when we insert into a full array. But divide the array size by 2 when we need to delete from a quarter full array.
(Quick answer to b)
Thus the downsized array will be half full (minus one element). This means that a linear number of inserts will have to happen for the array to double again. In fact the same holds for dividing again. This way, we get constant-time amortized complexity.
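A minimal sketch of this strategy (the class name and the exact resize thresholds are one reasonable choice, not the only one): double when an insertion hits a full array, halve when a deletion leaves the array at most a quarter full.

```python
class DynamicArray:
    """Backing array doubles when full, halves when it becomes a quarter full."""

    def __init__(self):
        self.size = 1        # capacity of the backing array
        self.n = 0           # number of elements stored
        self.data = [None]

    def _resize(self, new_size):
        # Copying into a fresh array costs Theta(n); this is the expensive step.
        new_data = [None] * new_size
        for i in range(self.n):
            new_data[i] = self.data[i]
        self.data, self.size = new_data, new_size

    def insert(self, x):
        if self.n == self.size:                          # full: double
            self._resize(2 * self.size)
        self.data[self.n] = x
        self.n += 1

    def delete(self):
        assert self.n > 0
        x = self.data[self.n - 1]                        # rightmost element
        self.n -= 1
        if self.size > 1 and self.n <= self.size // 4:   # quarter full: halve
            self._resize(self.size // 2)
        return x
```

At all times n ≤ size and (for n ≥ 1) size ≤ 4n, so the occupied space is Θ(n) as required.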
Potential method:
As always, we should look for something that changes a lot (and negatively) when we have an expensive operation. Here there are two kinds of expensive operations: doubling, where the array goes from full to twice the size and half full; and downsizing, where the array goes from a quarter full to half the size and thus half full. So what changes a lot? One thing is the number of items that would need to be added or removed in order to get a half-full array: If we have an array of size k that gets doubled to 2k, then beforehand our function returns k/2 and afterwards it returns 1. That’s a large drop in potential, proportional to the cost of copying k items. Just multiply the function by 2 and we’re done. But then what about downsizing? Suppose again that we have an array of size k. Then just before downsizing the potential is k/4 (because we would need to add k/4 items to be half-full). Of course, after downsizing the function returns 1. Once again, we have a large potential difference, proportional to the cost of copying k/4 elements. This potential function can be rephrased as Φ = 2 · |i − Size/2|, where i is the number of elements and Size is the array size. Note that the factor of 2 was needed for doubling, but not for downsizing, but it doesn’t hurt. In the textbook you will find the same calculation expressed as an if-then-else statement, without the factor of 2 for downsizing.

Accounting method:
Our main invariant will be that whenever we need to double or downsize we use up all the money in the bank. We already know how to do this for doubling, by maintaining the invariant that every element beyond the halfway mark has $2. If we ever need to delete an item that’s beyond the halfway mark, just burn the associated $2, because that maintains the invariant. It’s as though the item was never added in the first place. If we keep deleting below the halfway mark, create one dollar to place at that index. That way if we reach the quarter-full mark we have enough dollars to copy everything. On the other hand if we’re below the halfway mark and add an element, we can actually burn the dollar that’s sitting there. In summary the most expensive cheap move is still that of adding an element after we have a half-full array. The expensive moves are essentially paid for via preceding cheap moves.
2. You must support the following two operations on a set of numbers: (i) insertion, and (ii) “large-delete” which is to delete the largest half of the values in the set. In other words if there are k elements you delete the elements with the ⌈k/2⌉ largest values.
Both operations should take constant time, amortized. Assume that you start with an empty set.
(a) What data structure will you use, and how will you handle the operations? What is the worst-case time complexity for one operation?
(b) Show why each operation costs O(1) amortized. You should try both accounting and the potential method.
Answer:
(a) Use a linked list. Insertion takes worst-case O(1) time (let’s just say 1 per element). To delete, run deterministic Selection to find the median, and partition. This locates the largest values that can be discarded. The cost is linear (i.e., if n elements are present when we run large-delete, the cost is dn, for some leading constant d).
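A sketch of these operations (a plain Python list stands in for the linked list; for brevity the median step uses sorting, whereas the analysis assumes linear-time deterministic Selection plus a partition):

```python
import math

class LargeDeleteSet:
    """Set of numbers with O(1) insert and amortized O(1) large-delete."""

    def __init__(self):
        self.items = []               # stands in for an unordered linked list

    def insert(self, x):
        self.items.append(x)          # worst-case O(1)

    def large_delete(self):
        # Delete the ceil(k/2) largest values. The analysis assumes a
        # linear-time Selection to find the median; sorting is used here
        # only to keep the sketch short.
        k = len(self.items)
        keep = k - math.ceil(k / 2)   # number of smallest values that survive
        self.items = sorted(self.items)[:keep]
```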
Before we analyze formally, let’s get some intuition for the problem. When we delete, the linear cost can be shared by all the elements being deleted. In other words, if we have n elements and delete n/2 of them, the cost per deleted element is dn/(n/2) = 2d. Every element can be deleted at most once, so this is the total price we will ever need to pay for any given element. Sharing the cost over only the deleted elements makes accounting simpler, because surviving elements might be part of several large-deletes.
Note: It is not sufficient to analyze the specific scenario of multiple insertions followed by multiple deletions.
(b) Accounting method: When inserting an element x, instead of 1 we pay 1+2d, where d is the leading constant for deterministic Selection. We use the excess “savings” for eventual deletion of x during some expensive large-delete operation. Every element contributes its own savings that will be spent if it ever needs to be deleted. The intuition for saving 2d instead of d was mentioned above. When we perform the large-delete operation on k elements, each of those elements has 2d savings, so we have 2dk in the bank. The operation costs dk, so we use half of our savings and are left with dk for k/2 elements. In other words we preserve the invariant that our bank always has 2d per surviving element.

Potential method: We look for something in the data structure that changes a lot whenever we have an expensive operation. The natural thing to pick is the number of elements in the data structure, although that alone will not work out (try it if you haven’t). The order of magnitude is ok (linear), but this won’t create a low enough amortized cost for large-delete. So we let Φ be the number of elements in the list, times 2d. This is clearly always non-negative, and starts out as zero. The actual cost cᵢ of inserting an element is 1 but is amortized to ĉᵢ = 1 + 2d (still constant). Deletion of n/2 elements creates a huge negative potential difference that’s enough to pay for median-finding. Specifically, the cost of deleting n/2 elements is cᵢ = dn. The change in potential is −(n/2) · 2d = −dn, so the amortized cost of deletion is zero. Thus the sum of amortized costs over n operations is at most the amortized cost of n insertions, which is (2d+1)n = O(n). Thus the amortized cost per operation is constant, and so is the average true cost.
3. You are given k cards, arranged in k columns, each containing one card. A “move” is defined as choosing a column and redistributing all of its cards into other columns, but when doing this each of the columns can receive at most one card. Also, during the move, you are allowed to make new columns (in fact this is unavoidable if you don’t have enough existing columns in which to place all the cards), but again only one card can go in each new column.
The reward of one move is precisely the number of cards in the column that you choose to redistribute. Suppose that you get to make some huge number of moves (say, n, far greater than k).
(a) Develop a strategy to maximize your average reward per move (equivalent to maxi- mizing total reward over n moves). Express this as a function of k, using Θ-notation. In other words your maximization doesn’t have to be entirely precise; you may assume that k is any convenient number that will make the math easier for your strategy, but you cannot assume that k = O(1). Notice that any strategy that you come up with provides a lower bound on reward optimality. The better the strategy, the better (higher) the lower bound. It’s trivial to get Ω(1) per move, so you must get ω(1).
(b) Now it’s time to figure out how good your strategy actually is. We will get an upper bound on the optimal reward for this game: Use amortization (specifically, the potential method) to obtain an upper bound on the best possible average reward. Ideally this upper bound should match what you get in part (a), using Θ-notation. If it doesn’t, then either your algorithm isn’t optimal, or your analysis in part (b) isn’t tight enough. If it helps to think in terms of cost instead of reward, pretend that your friend is playing this game and it will cost you because you must pay their reward. In this context, you want to find an upper bound on the total cost of your friend’s n moves, without having the slightest idea of what their strategy is. In other words you will get an upper bound on what you will pay your friend, before your friend even starts to think about their strategy.
For amortization, as usual, you want to take advantage of the fact that “expensive” moves do not happen that often. Your Φ should create a ∆Φ that offsets expensive moves, making their amortized cost relatively smaller. So, first think about defining what an expensive move is, and what a non-expensive move is. It will help to have a good strategy in part (a) to determine what the cutoff should be. Then think about something that changes a lot in the configuration of columns whenever you have an expensive move, and use that as your ∆Φ, then reverse-engineer Φ. Always remember that when you evaluate Φi at any given time, you are taking a snapshot of the current state of the world. Φ cannot be a function of “what changed” between two iterations. That is the job of ∆Φ.
(c) Instead of the potential method, use the accounting method.
Answer:

(a) Here is a strategy that provides a reward of Θ(√k) per move. First, take as much (finite) time as you like, to build √k columns, of height √k, √k−1, √k−2, . . ., 1. This can easily be done in O(k) moves. You may have plenty of cards leftover, but just ignore them. Then for all subsequent moves, distribute the column of height √k to all the other ones that you built, and to one empty position. This simply restores the initial configuration. The reward is √k per move, after the initial phase. Even though this strategy might use as few as half of the cards, a similar arrangement using all of the cards would not change the reward asymptotically.

Here is another strategy with the same reward. Take some finite time to create x columns, each of size x. Make sure that x = Θ(√k), which is easy to do. We can ignore the remaining k − x² cards. Suppose that the x columns are in “positions” 1, . . . , x. Then perform x moves, each of which takes one of the columns and spreads it out into positions x+1, . . . , 2x. This recreates a block of x columns of size x, so we can repeat. The reward per move is x = Θ(√k).

So, now we know that the best possible reward per move is Ω(√k). It is more interesting to figure out why any strategy, the best strategy ever to be invented, will yield a reward of O(√k) per move. That is answered in (b) and (c).
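The first strategy can be simulated directly; a small sketch (with k chosen as a convenient perfect square) confirming that every post-setup move earns exactly √k:

```python
import math

k = 100                          # pick k a convenient perfect square
r = math.isqrt(k)                # r = sqrt(k) = 10

# Setup phase: columns of heights r, r-1, ..., 1 (uses r(r+1)/2 <= k cards).
columns = list(range(r, 0, -1))

def move(cols):
    """Redistribute the tallest column; the reward is its size."""
    cols.sort(reverse=True)
    reward = cols.pop(0)              # the column of height r
    cols[:] = [c + 1 for c in cols]   # one card onto each remaining column
    cols.append(1)                    # the last card starts a new column
    return reward                     # configuration is restored exactly

rewards = [move(columns) for _ in range(1000)]
assert all(rew == r for rew in rewards)   # reward is sqrt(k) on every move
```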
(b) Rephrasing reward as cost, we want to show that any sequence of n moves will cost at most O(n√k), even though we know that in the “worst” case a single move could cost k. In other words, a naive worst-case analysis would say that n moves will cost O(nk), but an amortized analysis can provide a better bound.

As always, the key is to define what an “expensive” operation is. Given that we already have a target cost, that should be used to define “expensive” and “cheap” operations. So we need a Φ that will “react” a lot whenever we have an operation that exceeds our target by a lot, i.e., whenever we move a column of size ω(√k). We want ∆Φ to cancel out expensive operations, but we are not bothered if it leaves a leftover of O(√k).

Consider a potential function Φ that counts the number of cards above height √k, in all columns. Notice that at most √k columns can contribute to this count. Call these columns “large”. When we make a move, we can increase the potential by at most √k, because we can place at most one card in each large column. Whenever we move a non-large column, the true cost c is less than √k, and the potential can increase by at most √k. Thus, by definition the amortized cost ĉ is c + ∆Φ = O(√k).

Also, if we ever move a large column, the true cost will be x + √k, where x is the excess above height √k. However the amortized cost will still be at most O(√k), because at least x cards will have to move to non-large columns. In other words, there is definitely a contribution of −x to ∆Φ, and there might be a positive contribution of at most √k to ∆Φ, so ĉ = c + ∆Φ = (x + √k) + ∆Φ ≤ (x + √k) − x + √k = O(√k).

We also claim that the sum of amortized costs is no smaller than the sum of true costs. This is true if Φn − Φ0 ≥ 0, which in this case holds because Φn ≥ 0 and Φ0 = 0. In other words, we can’t have a negative number of cards above height √k, and we start with a configuration that has no cards above.
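This argument can be checked empirically. The sketch below (hypothetical helper name, arbitrary random play standing in for “any strategy”) verifies that c + ∆Φ never exceeds 2√k on any move, with Φ defined as the number of cards above height √k:

```python
import math
import random

def check_amortized_bound(k, n_moves, seed=1):
    """Play arbitrary random moves and verify that every amortized cost
    c + (Phi_after - Phi_before) is at most 2*sqrt(k), where Phi counts
    the cards above height sqrt(k). Assumes k is a perfect square."""
    rng = random.Random(seed)
    root = math.isqrt(k)
    cols = [1] * k                                   # initial: k columns of one card
    phi = lambda: sum(max(c - root, 0) for c in cols)
    for _ in range(n_moves):
        before = phi()
        cost = cols.pop(rng.randrange(len(cols)))    # true cost = chosen column's size
        # Redistribute one card per (distinct) column; leftovers start new columns.
        targets = rng.sample(range(len(cols)), min(cost, len(cols)))
        for t in targets:
            cols[t] += 1
        cols.extend([1] * (cost - len(targets)))
        assert cost + phi() - before <= 2 * root     # amortized cost bound holds
    return True

assert check_amortized_bound(k=400, n_moves=2000)
```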
(c) Accounting method: Every time we move a column, pay double for each card that moves from below to above the √k line. Then our invariant is that we store a “dollar” in the bank for every card above the line. It is best to visualize one dollar attached to every card above the line. The reasoning is similar to that in part (b). Briefly, there are at most √k columns of height at least √k. So only those columns can possibly receive a card that moved from below to above. Whatever move we make, our extra cost will be at most √k.

Every time we move a large column, every card that is above and moves below gets paid for by the dollar that was stored at its position. If a card is moved but remains above the line, it brings its dollar along with it. Thus when we move a large column we only need to pay for the cards that are below the line; everything else is prepaid. As mentioned, we might also need an extra √k to create new dollars in case cards move from below to above the line. When we move a short column the true cost is under √k and the extra amount is also under √k. So the conclusion is that the amortized cost of every move is at most 2√k.

Note: the answer to (c) is short, but it seems to me that the methodology of the potential method gives more hints about how to amortize. In other words, looking at the configuration and figuring out what changes when we move “large” columns seems more intuitive to me than somehow realizing that we should store a dollar only for certain cards. But this is subjective: with the accounting method, we also create a threshold that we’re happy to pay, so somehow everything above would need to be paid for already. That essentially tells us that we should have dollars available to pay for everything in excess of √k, and since we don’t know which column will be moved, we might as well be prepared for every column.

Note 2: some popular potential functions that don’t work: number of columns, number of empty columns (which would require stating that a maximum number of columns exists), size of tallest column, average height of columns. For each of these there exist configurations where a large number of cards is moved yet there is no significant potential difference.
4. You are given a text T of n characters, and your goal is to find the length of a subsequence in T that is as long as possible and has the property that it reads the same forward and backward. For example, in XABCBBACXA, we can find XAB…BBA…X… to form the subsequence XABBBAX of length 7.
(a) Write a recursive formula to solve this, without getting LCS involved.
(b) Describe your dynamic programming table, and how it gets filled bottom-up. (where are the base cases, where is the answer, how do you calculate each item in the table?) (c) What is the time and space complexity?
Explanation: What we’re looking for is called a longest palindromic subsequence (LPS). By definition the LPS is a subset of T, such that we obtain a set of nested pairs of characters: The first and last character of the LPS must match, and if there is more to the LPS then all remaining matches must be located in between the first pair, etc. At the very middle we might have a single character that is not paired.
Answer:
We can find the length, LPS[i, j], of the LPS between indices i and j as follows. For base cases when i = j, the answer is trivially 1. For base cases when j = i+1, the answer is trivially 1 or 2, depending on whether there is a match. In general, LPS[i,j] = 2+ LPS[i+1, j−1] if T[i] = T [j]. Otherwise, it is the max of LPS[i+1, j] and LPS[i, j−1].
Our recursive formulation has two parameters, i and j. So there are O(n²) distinct subproblems to deal with, and they can be stored in a 2D dynamic programming table. Specifically, at cell (i, j) we store LPS[i, j]. This is just a length, not an entire LPS which would be expensive to store. In each row, j increases to the right. In each column, i increases downward. The base cases are located along two successive diagonals; all cells such that i = j, or j = i+1. The length of the LPS of T is at the top-right corner, where i = 1 and j = n. Each cell depends on three neighbors (and two characters in the string), and is computed in constant time. Overall the time complexity is O(n²), but if we use two arrays to mimic two diagonals of the table, we get O(n) space with a bottom-up approach. It’s also possible to use two arrays to work row by row or column by column.
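The table-filling described above can be sketched as follows (0-indexed, and iterating by increasing subproblem length, which visits the same diagonals; the function name is illustrative):

```python
def lps_length(T):
    """Length of the longest palindromic subsequence of T, filled bottom-up."""
    n = len(T)
    if n == 0:
        return 0
    # table[i][j] = LPS length of T[i..j] (inclusive)
    table = [[0] * n for _ in range(n)]
    for i in range(n):
        table[i][i] = 1                           # base case: single character
    for length in range(2, n + 1):                # increasing subproblem size
        for i in range(n - length + 1):
            j = i + length - 1
            if T[i] == T[j]:
                inner = table[i + 1][j - 1] if length > 2 else 0
                table[i][j] = 2 + inner           # matched outer pair
            else:
                table[i][j] = max(table[i + 1][j], table[i][j - 1])
    return table[0][n - 1]                        # answer: whole string

# The example from the problem statement: XABBBAX has length 7.
assert lps_length("XABCBBACXA") == 7
```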
Notes:
(i) The proof of correctness is not included here, but is similar in nature to that for LCS. (ii) There is a different solution that relies on LCS but its proof of correctness is not obvious, and it can return incorrect LPS strings if not careful, although it does find the length correctly.
5. You and your friend are presented with a row of n buckets, each containing some amount of gold. Suppose that each amount is represented by some real number. You both decide to play a game, taking turns as follows. First you will keep either the first or the last bucket. Then your friend will keep the first or last of what remains, and so on. This will continue until you have partitioned the buckets.
(a) Provide a recursive formulation that returns the maximum amount of gold that you can acquire. This assumes that both sides are playing as best as possible, each to maximize their own yield.
(b) Analyze the time complexity of finding the maximum in part (a), via plain recursion and then via dynamic programming. What space complexity can you achieve?

Hint: You have two options, and need to take the max. But once you choose either option, you don’t have control of the next move. However you know that in each case your friend will have two options, and will play optimally to leave you with the min. Only after that will you be left with a subproblem that has exactly the same conditions as what you started with.
Answer:
(a) The row of buckets is an array A with n indices, occupied by positive real numbers. Let C(i, j) represent the amount of gold you get if you play first on the subarray A[i . . . j]. You first have to select A[i] or A[j]. Let’s analyze what happens in the former case. After your move, your friend must select A[i+1] or A[j]. Respectively you will be left with the choice of A[i+2] and A[j], or A[i+1] and A[j−1]. In fact what this means is that you will be left with a brand new game, where you will gain either C(i+2,j), or C(i+1,j−1). It is your friend’s job to leave you with the least profitable longterm option. So selecting A[i] means you have a gain of Gi = A[i] + min{C(i+2, j), C(i+1, j−1)}. The min arises because your friend can force you to have whichever of the two subarrays gives you the least gold.
On the other hand if you initially select A[j], then your friend must select A[i] or A[j−1]. With a similar analysis, your gain Gj is A[j] + min{C(i+1, j−1), C(i, j−2)}.
To recap, you choose A[i] or A[j] depending on which of the two maximizes the total gold yield. Thus C(i,j) = max{Gi,Gj} =
max{A[i] + min{C(i+2, j), C(i+1, j−1)}, A[j] + min{C(i+1, j−1), C(i, j−2)}}.
Of course, there will be a base case whenever the subarray has a small constant size. The amount of gold that you can guarantee to collect is C(1,n).
(b) With a recursion tree we can see that C(1,n) will recursively require the solution for three smaller subproblems. In each of these subproblems, the size of the subarray decreases by 2. So the recursion tree will have linear depth, and a branching factor of 3, meaning it has exponential size.
On the other hand, with dynamic programming, we can compute C(i,j) in constant time, because each of the three subproblems will be stored in a table, and computing the required minima and maxima takes constant time. Thus we can build C() for all base cases first, and then gradually for larger subarray sizes. In terms of the dynamic programming table, each row would correspond to a starting index and each column to an ending index of a subarray. Thus we would need to fill the top-right part of the array, above the diagonal from top-left to bottom-right. We would compute the diagonal entries first, and expand toward the top-right corner. The value of any cell depends on the value of the cell that is 2 to the left in the same row, and the cell that is 2 below in the same column, as well as the cell that is diagonally down and to the left. Notice that parity is conserved. If the array is colored like a chess board, then white cells only depend on white cells. This has to do with the size of the initial array. Depending on whether it is even or odd, the base case will be one of two diagonals. When you play on an even array, your next turn will always be on an even array.
Overall we will compute a quadratic number of entries, so we can solve this problem in O(n²) time. In fact we only need to store a linear number of entries in order to compute all entries for subarrays of a given size. So we can find the gold yield using linear space. This is the same little trick as shown for LCS.
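The recurrence can be sketched as top-down memoization (equivalent to the bottom-up table fill described above; the function name is illustrative):

```python
from functools import lru_cache

def max_gold(A):
    """Amount of gold the first player can guarantee on buckets A[0..n-1]."""

    @lru_cache(maxsize=None)
    def C(i, j):
        if j < i:
            return 0
        if j == i:
            return A[i]                      # one bucket left: take it
        if j == i + 1:
            return max(A[i], A[j])           # two buckets: take the larger
        # In each case your friend leaves you the less profitable subgame.
        gain_i = A[i] + min(C(i + 2, j), C(i + 1, j - 1))
        gain_j = A[j] + min(C(i + 1, j - 1), C(i, j - 2))
        return max(gain_i, gain_j)

    return C(0, len(A) - 1)
```

For example, on buckets [3, 9, 1, 2] you take the 2, your friend is then forced to hand you the 9 on your next turn, and you collect 11 in total.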

6. You have an integer amount, n, of liquid to transport. You’re able to manufacture k different container types. Each type has a unique integer volume. In other words, for 1 ≤ i ≤ k, container type i fits bi units of liquid, where bi is an integer. You also know that one of the container types has unit volume (e.g., b1 = 1).
For each type you can produce as many containers as you like. Every container that you produce costs one dollar, regardless of volume. Every container that you use for transportation must be entirely full.
Show how we can determine the minimum cost (in dollars) to transport all the liquid. What is the time and space complexity of finding the minimum cost?
Answer:
All useful containers have capacity at most n; any larger container would be ineligible because it could not be filled. So we could scan through the list of containers in O(k) time and keep only those that are not too large. From now on let’s assume that we only have container sizes no greater than n, so k ≤ n. Also without loss of generality, b1 = 1.
This problem is similar to the rod cutting problem. Some container type has to be used “first”. Thus we try out all O(k) possible options for this first container, and in each case we deal with the rest of the liquid recursively. Suppose that the first container has some capacity bi. After paying a dollar, we will still have to deal with n − bi units of liquid. Let D(x) represent the smallest possible cost to deal with x units of liquid.
Then D(x) = min{1 + D(x − bi)}
where the min is taken over all i such that x ≥ bi. In other words, for every eligible container size (i.e., eligible value of i), we are considering the option of paying 1 and dealing with the remainder. In the worst case, we take the min of k options because there are k container types. If we do this recursively, we will get an exponential time complexity. Notice that if bi = x then we just pay 1 for one container of that size.
Construct an array that can hold the values D(x), for 1 ≤ x ≤ n. We will fill in the array incrementally, so that when we need to compute D(x), we evaluate the above recursive equation in O(k) time. It only takes O(k) time (rather than O(x) which is the size of the part of the table that is filled in so far), because we just need to skip back to look up O(k) positions in the array. The required indices become available by subtracting eligible container sizes from the current x. Thus the overall time is O(nk), and we use O(n+k) space; n for the dynamic programming array, and k for our list of container sizes.
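The recurrence for D(x) translates directly into a bottom-up sketch (hypothetical function name; the b₁ = 1 assumption guarantees a solution always exists):

```python
def min_containers(n, sizes):
    """Minimum number of containers (dollars) to transport exactly n units.
    sizes must contain 1, so every amount is achievable."""
    sizes = [b for b in sizes if b <= n]      # larger containers can't be filled
    D = [0] * (n + 1)                         # D[x] = min cost for x units
    for x in range(1, n + 1):
        # Try every eligible "first" container size: pay 1, recurse on the rest.
        D[x] = min(1 + D[x - b] for b in sizes if b <= x)
    return D[n]
```

Each of the n cells looks back at O(k) earlier cells, giving the O(nk) time and O(n + k) space stated above.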
7. Suppose that you have two strings, S1,S2, each consisting of characters from some small alphabet. The lengths of the strings are k and n respectively. You can assume that n ≥ k but not that k is necessarily some small value.
If we alternate reading characters from one string and then the other, we create a “blend” of the two strings. For example if
S1 = DYNOGRASOAMA and S2 = AMICPRMMINGISZING we can create this blend: DYNAMICPROGRAMMINGISSOAMAZING
Note that the length of a blend is n+k, and there can be many blends.
Now for the problem: Given three strings S1, S2 and C, determine if C is a blend of S1 and S2. If S1 and S2 have size k and n respectively, the time complexity should be O(kn).

You should address the following in your answer:
Formulate the problem recursively, including an informal justification for its correctness. (How can you categorize subproblems that will be helpful to solve first?) How many distinct subproblems are there? Mention what you consider to be a base case. What is the size of the dynamic programming table that you will use, how do you fill it in, and where are the base cases and final answer located?
Answer:
The blend must start with some character of S1 or S2. Similarly it must end with some character from S1 or S2. We arbitrarily use the latter observation. So one solution is to consider the last character in each string, and if it matches the last character of C then recurse by removing it from C and the Si it came from. There is no score to optimize here; we just care about matching or not matching, so all we’re doing is recursing on solutions that continue to match. Clearly this recursive approach takes exponential time: by brute-force, for each subproblem we recurse O(n+k) times (once for every prefix that is left over after chopping off and trying to match the last character), and several subproblems are almost as large as the original (in other words, the depth of the tree is linear).
To reiterate, for each of two choices (try to use last character from S1, or from S2), we are possibly left with a subproblem that involves a prefix of S1 and a prefix of S2, where in fact one of the two prefixes is the same as what we started with. We are making a binary choice, a linear number of times.
However, there are only k prefixes in S1 and n prefixes in S2, so there are kn distinct subproblems that could ever be solved. Each subproblem involves trying to determine if some prefix of C is a blend of some prefix of S1 and some prefix of S2. The good thing is that the size of the prefix of C is known implicitly (it’s just the sum of the two other prefixes), and is not part of the recursive formulation. That’s why we will get quadratic time and not cubic.
Make a table A of size k by n. Rows, numbered from top to bottom, correspond to S1, and columns correspond to S2 from left to right. A[i,j] corresponds to blending a prefix of length i from S1 and a prefix of length j from S2 into a prefix of C (of length i+j). So A[i,j] will be either True or False. At the top-left corner, we have A[0,0], which we won’t need. The useful base cases have form A[0, j] and A[i, 0]. We can compute A[0, j] by incrementing j, starting at j = 1. For example, what does A[0, j] represent? Blending a prefix of size j from S2 with an empty S1 into C. This just asks if S2 = C. So we scan S2 and C and fill in True while they match, and then False afterwards if we notice one mismatch.
Now, any non-base-case A[i,j] relies on the entries directly above and to the left, i.e., A[i−1, j] and A[i, j−1]. As mentioned earlier, that is because at this point we are checking if the (i+j)-th character of C is going to come from S1 or S2 (keeping in mind that we want to do this while trying to blend two specific prefixes of S1 and S2). For A[i,j] to be True, we need C[i+j] to match S1[i] and A[i−1,j] to be True, –or– we need C[i+j] to match S2[j] and A[i,j−1] to be True. In other words we check if the last character from one of the two prefixes will match the last character of the corresponding prefix in C (whose size is the sum of the two other prefixes), and if so then we need to know that the corresponding subproblem has a solution.

Thus in quadratic time we propagate True values through the table, down and to the right. The solution is found at the bottom-right cell.
Note that if at any point we fill a row or column (or any pattern that “blocks off” the top-left from the bottom-right) with False values, we can stop. This is just a heuristic that might save time in practice; performing such checks has its own cost, so in the worst case it is a waste of time.
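The table-filling above can be sketched as follows (0-indexed Python; for convenience this sketch does use A[0][0] = True as the corner base case, which subsumes the separate scans described for row 0 and column 0):

```python
def is_blend(S1, S2, C):
    """True iff C is a blend (interleaving) of S1 and S2. O(kn) time."""
    k, n = len(S1), len(S2)
    if k + n != len(C):
        return False
    # A[i][j]: can the prefix of C of length i+j be blended from the
    # first i characters of S1 and the first j characters of S2?
    A = [[False] * (n + 1) for _ in range(k + 1)]
    A[0][0] = True
    for i in range(k + 1):
        for j in range(n + 1):
            # Next character of C comes from S1 ...
            if i > 0 and A[i - 1][j] and S1[i - 1] == C[i + j - 1]:
                A[i][j] = True
            # ... or from S2.
            if j > 0 and A[i][j - 1] and S2[j - 1] == C[i + j - 1]:
                A[i][j] = True
    return A[k][n]     # answer at the bottom-right cell
```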
8. Show how to determine whether a directed graph G contains a vertex with in-degree V −1 and out-degree 0, in time O(V ), given an adjacency matrix for G.
Answer:
A vertex with the properties that we’re looking for is known as a universal sink.
For all of the following, we ignore entries in the diagonal of the matrix.
A universal sink must have all 0’s in its row, and all 1’s in its column. Scan the top row of the matrix until a 1 is found. During the scan, we will have confirmed that the vertex corresponding to this row has no outgoing edges. Finding a 1 means that the vertex can’t be a universal sink. But all the 0’s scanned also mean that the vertices in the corresponding columns are missing an outgoing edge, so they can’t be universal sinks either. In other words, as we scan, we spend constant time per removed candidate. If we don’t actually find a 1 in the row, then the vertex of this row is the only candidate for a universal sink, so we would just need to confirm that it has all 1’s in its column. On the other hand, if the horizontal scan finds a 1 at column k, we know that all vertices with smaller indices are eliminated, and that vertex k might be the sink. We can skip down to row k and resume scanning towards the right. Using similar reasoning, either vertex k will be the sole possible universal sink, or it will be eliminated along with all vertices that precede it. So our scan progresses either to the right or down, meaning we spend at most 2V time overall. At the end we will either have eliminated all vertices, or we will have one candidate, and a simple linear check finishes the job.
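A compact variant of this scan (keeping a single surviving candidate rather than literally walking right and down through the matrix; the elimination argument is the same, and each comparison removes at least one candidate):

```python
def universal_sink(M):
    """Return the index of the universal sink of adjacency matrix M, or None.
    Runs in O(V) time."""
    V = len(M)
    cand = 0
    for j in range(1, V):
        if M[cand][j] == 1:     # cand has an outgoing edge, so it is eliminated;
            cand = j            # every vertex before j is already eliminated too
    # One final linear check of the sole surviving candidate's row and column.
    for v in range(V):
        if v == cand:
            continue            # ignore the diagonal
        if M[cand][v] == 1 or M[v][cand] == 0:
            return None
    return cand
```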
9. You’ve invited n friends to a party. Your palace has two floors, each of which can fit everyone invited. The problem is, some of your friends don’t like each other, and refuse to be on the same floor. Each person gives you a list of who they like and a list of who they don’t like. How do you set this up as a graph problem and figure out if everyone can come to your party?
Answer:
Let graph G have n vertices (one for every friend) and an edge for every pair of people in which at least one dislikes the other. The graph is simple to build, either with an adjacency matrix or an adjacency list, given our input. We need to assign each vertex to one of two groups (floors), where no two vertices sharing an edge can end up in the same group. In other words, we must try to “color” the vertices of G with 2 colors, so that no edge has the same color at both its endpoints. Note that G is undirected. If it has more than one component, then every person in one component is happy to be on the same floor as everyone from every other component, so we can treat components independently. Either way, the components will be identified via BFS.
Perform a BFS starting from any vertex v. Suppose we color v blue. Then all neighbors of v must be colored red. Those are precisely the vertices visited in the first iteration of BFS, so we can consider them all to be time-stamped with a “1”. In the second iteration, each of these red vertices will explore one level further; every new vertex found must be colored blue (or, time-stamped with a “2”). If any of these red vertices explores an edge leading to a marked (already explored) vertex, check whether it is colored red or blue (or, check the parity of its time-stamp). If the colors alternate, we are still ok. Otherwise, not everybody can go to the party.
Continue until BFS ends: each iteration has a parity (odd or even). For each vertex in such an iteration, there are two types of edges: those that join to unexplored vertices, and those linking to already-marked vertices. In the former case we are forced to color newly-discovered vertices with the opposite of the current color. In the latter case we verify that the parity (color) is different at both ends. After we explore the entire component that v is in, we can continue with a BFS from any unmarked vertex, to explore a new component.
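The BFS 2-coloring described above can be sketched as follows; the function name and input format (n friends, a list of disliking pairs) are illustrative assumptions, not part of the original statement:

```python
from collections import deque

def party_possible(n, dislikes):
    """BFS 2-coloring (bipartiteness test). Returns a list assigning
    each friend to floor 0 or 1, or None if no valid split exists.
    `dislikes` is a list of (a, b) pairs of friend indices."""
    adj = [[] for _ in range(n)]
    for a, b in dislikes:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n                    # None = unexplored
    for start in range(n):                # one BFS per component
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if color[w] is None:      # unexplored: opposite color
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return None           # edge with equal colors: impossible
    return color
```

A triangle of mutual dislikes (an odd cycle) makes the party impossible, while a simple chain of dislikes can always be split alternately across the two floors.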
10. Let G be a graph with red, blue and white vertices. Show how to identify all red vertices that have a path to at least one blue vertex without going through other red vertices. Argue correctness and provide a time complexity for your algorithm.
Sub-optimal answer: This is a search problem. From every red vertex v we could search (BFS or DFS) through G to find a vertex that is blue. We know that some edges are not usable for the search from v: specifically, no edge incident to another red vertex is usable. When this search terminates, we could start a new search from some other red vertex. If we don’t maintain any information between searches, then the worst-case cost of the search from any red vertex can be upper-bounded by the number of edges in its component. By just running a search for every red vertex, the time complexity would be O(E) per search, where E is the number of edges in G. Thus the sum of all searches from all red vertices could be bounded by O(EV). This bound could be reached if there is one component and each red vertex has to search through most of it, whether a blue vertex is found at the end or not. A simple example would be a linear number of red vertices, all linking to one end of a linear-sized chain of white vertices, which may or may not have a blue vertex at the other end. Notice that each new search repeats a lot of the same work.
This idea is somewhat brute force.
Much better answer:
We will mark white nodes, to avoid repeating work. A clear way to do this is to perform the search starting from blue nodes, without going through other blue nodes or red nodes. Now, if a white node is encountered, color it blue to signify that it can reach a blue vertex according to our rules. Then continue the search: in turn the white node that has been converted to blue will lead to a recoloring of other neighboring white nodes. Whenever red nodes are encountered, they are added to an output list (and marked so we don’t duplicate output). When one blue node is fully processed, we start a new search from another blue node. There is no need to perform a brand new search from blue nodes that were initially white. They have already done their search.

If you prefer to search starting from red nodes, the solution is similar but now we must also mark white nodes if they are “useless”. Consider the search from a red node, R, through one of its white neighbors (assuming we don’t have the trivial scenario of no neighbors or a blue neighbor). This particular search from R will progress through white nodes until a blue node is found or until the search backtracks to R. In the former case, all white nodes that were found in this particular search should be colored blue. In the latter case all of these white nodes should be marked as “useless” to signal that no future search should pass through them.
Whichever version we use (starting from red or from blue), each particular search reaches some number of white vertices that have not previously been altered or recolored. The cost is proportional to the number of such vertices plus the number of their incident edges. These vertices won’t be processed by future searches, so all together we get a cost of O(1) per vertex and also per edge. Thus the total time complexity is O(E + V ).
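The “search from blue” version can be sketched as a single multi-source traversal; the input format (adjacency lists plus a color list) is an illustrative assumption:

```python
def reds_reaching_blue(adj, color):
    """Return the red vertices that can reach a blue vertex through
    white vertices only. `adj` is a list of adjacency lists and
    `color` holds 'red', 'blue', or 'white' per vertex. O(V + E)."""
    color = list(color)                 # work on a copy
    output = set()
    # Start from every original blue vertex at once (multi-source DFS).
    stack = [v for v in range(len(adj)) if color[v] == 'blue']
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if color[w] == 'red':
                output.add(w)           # red vertex adjacent to the blue region
            elif color[w] == 'white':
                color[w] = 'blue'       # this white vertex can reach a blue one
                stack.append(w)         # recolor before pushing: visited once
    return sorted(output)
```

Each white vertex is recolored at most once before being pushed, so every vertex and edge is processed a constant number of times, matching the O(E + V) bound argued above. Note that a red vertex whose only paths to blue pass through another red vertex is correctly excluded, since the traversal never crosses red vertices.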