
CS/ECE 374: Spring 2019
Solutions to the Final. Version: 1.0
1 (10 pts.) Short questions and hopefully short answers. No justification is required for your answers.
(5 pts.) Give an asymptotically tight bound for the following recurrence.


T(n) = T(n1) + T(n2) + · · · + T(n10) + O(n) for n > 200, and T(n) = 1 for 1 ≤ n ≤ 200,

where n1 + n2 + · · · + n10 = n, and n/20 ≤ ni ≤ (9/10)n for all i.

Solution: T(n) = O(n log n). (Each level of the recursion tree sums to O(n), and since every ni ≤ (9/10)n, the recursion tree has depth O(log n).)

(5 pts.) Describe (in detail) a DFA for the language below. Label the states and/or briefly explain their meaning.

{w ∈ {0, 1}∗ | w has at least three 1's and has odd length}.

Solution: The easiest solution is to take a counting DFA M1 that counts the number of 1s it has seen so far, up to 3, and a counting DFA M2 that keeps track of whether the input length so far is even or odd, and then build the product automaton. Its states are pairs (c, p), where c ∈ {0, 1, 2, 3} is the (capped) count of 1s and p ∈ {e, o} is the parity of the length read so far; the accepting states are those with c = 3 and p = o.

[The clean drawing of this product automaton, with states labeled c0e, c0o, . . . , c3e, c3o, is omitted from this transcription.]
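As an executable sanity check of the product construction, one can simulate the two counters directly (a sketch; `in_language` is a hypothetical helper name, c is the number of 1s seen capped at 3, and p is the parity of the length):

```python
def in_language(w):
    # Product DFA state: c = number of 1s seen, capped at 3 (automaton M1),
    # p = parity of the length read so far (automaton M2).
    c, p = 0, 0
    for ch in w:
        if ch == '1':
            c = min(c + 1, 3)
        p ^= 1
    # accept iff at least three 1s and odd total length
    return c == 3 and p == 1
```

For example, in_language("11011") is True (four 1s, odd length), while in_language("1110") is False (even length).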

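Circling back to the recurrence from the first short question: its behavior can also be checked numerically with the balanced split ni = n/10, which satisfies the constraints n/20 ≤ ni ≤ (9/10)n (a sketch with one admissible split, not a proof):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # balanced instance of the recurrence: ten subproblems of size n/10
    if n <= 200:
        return 1
    return 10 * T(n // 10) + n

# T(n) / (n log n) should stay bounded as n grows
ratios = [T(10 ** k) / (10 ** k * math.log(10 ** k)) for k in range(3, 8)]
```

The ratios stay below a constant (about 1/ln 10 in the limit for this split), consistent with the O(n log n) bound.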
2 (15 pts.) I have a question about NFAs.
2.A. (5 pts.) Recall that an NFA N is specified as (Q, δ, Σ, s, F ) where Q is a finite set of states, Σ is a finite alphabet, s ∈ Q is the start state, F ⊆ Q is the set of accepting states, and δ : Q × (Σ ∪ {ε}) → 2Q is the transition function. Recall that δ∗ extends δ to strings: δ∗(q, w) is the set of states reachable in N from state q on input string w.
In the NFA shown in the figure below what is δ∗(S,0)?
[The figure of the NFA N = (Q, Σ, δ, s, F), with states including S, B, and G, is omitted from this transcription; δ∗(S, 0) is the set of all states reachable from S by ε-transitions, then a single 0-transition, then ε-transitions again.]
2.B. (10 pts.) Given an NFA N = (Q, Σ, δ, s, F), an arbitrary state q ∈ Q, and an arbitrary string w ∈ Σ∗ of length t, describe an efficient algorithm that computes δ∗(q, w). Express the running time of your algorithm in terms of n, m, t, where n = |Q|, m = Σ_{p∈Q} Σ_{b∈Σ∪{ε}} |δ(p, b)|, and t = |w|. Note that faster solutions can earn more points. You can assume that |Σ| = O(1).
You do not need to prove the correctness of your algorithm (no credit for an incorrect algorithm). (Hint: Construct the appropriate graph, and do the appropriate things to it.)
We interpret the NFA N as a directed graph G = (V, E), with vertex set V = Q. For any u, v ∈ Q such that v ∈ δ(u, c) for some c ∈ Σ ∪ {ε}, we add the edge u → v to E. For every edge created this way, we also build an array, so that given c ∈ Σ ∪ {ε} we can decide in constant time whether v ∈ δ(u, c); if so, the edge is c-admissible. Clearly, this preprocessing can be done in O((n + m)|Σ|) = O(n + m) time.

Computing the ε-closure. We extend the BFS algorithm so that, given a set of vertices X ⊆ V, it computes the set of vertices Y ⊇ X reachable from X by ε-transitions. This can be done, for example, by copying G, removing all the edges that are not ε-admissible, creating a new vertex z, and connecting z to all the vertices of X. Doing a BFS in the new graph H starting at z, all the vertices that were visited (ignoring z) are exactly the vertices ε-reachable from X. This takes O(n + m) time.

Computing a c-transition. Given a set of vertices X and a character c, scan the graph: for each vertex x ∈ X, scan all the outgoing edges of x, and whenever an edge x → y is c-admissible (i.e., y ∈ δ(x, c), which can be determined in constant time by the preprocessing), add y to the output set Y. Clearly, this can be done in O(n + m) time.

The algorithm. Let w = w1w2 . . . wt, and let Q0 = {q}. In the ith iteration, we compute the ε-closure of Qi−1, and let Q′i be this set of states/vertices. Next, we compute the wi-transition of Q′i, and let Q′′i be the resulting set of vertices. Finally, we compute the ε-closure of Q′′i, and let Qi be the resulting set of vertices. The output is δ∗(q, w) = Qt.
Running time. Each of the t iterations invokes three procedures, each taking O(n + m) time. Overall, the algorithm runs in O((n + m)t) time.
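This algorithm can be sketched in Python (a sketch only, assuming for concreteness that the transition function is given as a dictionary mapping (state, symbol) pairs to sets of states, with the empty string '' standing for ε):

```python
from collections import deque

def eps_closure(delta, states):
    """BFS along epsilon-edges only; O(n + m) time."""
    seen = set(states)
    queue = deque(states)
    while queue:
        u = queue.popleft()
        for v in delta.get((u, ''), ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def delta_star(delta, q, w):
    """Set of states reachable from q on string w; O((n + m) * |w|) overall."""
    current = eps_closure(delta, {q})
    for c in w:
        # one c-transition step from every state in the current set...
        step = set()
        for u in current:
            step |= delta.get((u, c), set())
        # ...followed by an epsilon-closure
        current = eps_closure(delta, step)
    return current
```

For instance, with delta = {('s', ''): {'a'}, ('a', '0'): {'b'}, ('b', ''): {'c'}}, the call delta_star(delta, 's', '0') returns {'b', 'c'}.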
3 (15 pts.) MST when there are few weights.
Let G = (V, E) be a connected undirected graph with n vertices and m edges, and with positive edge weights. Here, the edge weights are taken from a small set of k possible weights {w1, w2, . . . , wk} (for simplicity, assume that w1 < w2 < · · · < wk). Describe a linear-time algorithm to compute the MST of G for the case that k is a constant. (For partial credit, you can solve the case k = 2.) Provide a short explanation of why your algorithm is correct (no need for a formal proof).

Modifying Prim's algorithm. For an edge e of weight wi, we refer to i as the index i(e) of e. Precompute the indices of all edges; this takes O(m log k) time. For each index i, we maintain a queue Qi of all the edges of index i that leave the current spanning tree. We maintain all the non-empty queues in a heap. This provides us with a heap that performs each operation in O(log k) time. For example, to insert an edge e of index i into this heap, we check if Qi is empty; if it is, we add e to it, and add Qi to the heap of queues. Similarly, to do an extract-min, we get the minimum non-empty queue, and delete an edge from it. As such, we can directly implement Prim's algorithm, and the running time is O((n + m) log k).

Modifying Kruskal's algorithm. Let H be a graph where some edges are marked as tree edges and some edges are marked as usable. Let F be the set of tree edges, and let U be the set of usable edges. Here, we assume that the tree edges form a forest. One can modify DFS so that it uses all the tree edges and only the usable edges: when visiting a vertex, it first scans its edges and recurses on those marked as tree edges; then it scans them again, and recurses on all the usable edges. Clearly, the resulting DFS forest is a spanning forest of the graph formed by the usable and tree edges. The running time of this algorithm is O(n + m). Let BTU(F, U) be the resulting algorithm. Let's get back to the problem.
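The bucket-by-bucket plan can be condensed into a short executable sketch (a sketch only: it substitutes a union-find structure for the DFS-based BTU routine, but processes the k weight classes in the same increasing order; `mst_few_weights` is a hypothetical helper name):

```python
def mst_few_weights(n, edges, weights):
    """edges: list of (u, v, i), where i is the index of the edge's weight
    in the sorted list `weights` (so the edge's weight is weights[i])."""
    k = len(weights)
    buckets = [[] for _ in range(k)]    # the sets E_1, ..., E_k
    for u, v, i in edges:
        buckets[i].append((u, v))

    parent = list(range(n))             # union-find with path halving
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, tree = 0, []
    for i in range(k):                  # weight classes in increasing order
        for u, v in buckets[i]:
            ru, rv = find(u), find(v)
            if ru != rv:                # edge connects two components
                parent[ru] = rv
                tree.append((u, v))
                total += weights[i]
    return total, tree
```

Partitioning into buckets replaces the sorting step of Kruskal's algorithm, which is where the O(n + m) bound for constant k comes from.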
Scan the edges, and partition them into k sets E1, . . . , Ek, where Ei contains all the edges of G with weight wi. This can be done in O(n + m log k) time. Let F0 = ∅. In the ith iteration of the algorithm, for i = 1, . . . , k, the algorithm computes Fi = BTU(Fi−1, Ei). The output of the algorithm is Fk, which is the desired MST.

Correctness. Consider any non-decreasing ordering of the edges of G in which, for each j, all the edges of weight wj that appear in Fj come before all the other edges of weight wj. It is easy to verify that the output of the algorithm is the MST computed by Kruskal's algorithm when executed on this ordering of the edges.

Running time. The preprocessing takes O(n + m log k) time, and each iteration takes O(n + m) time. Overall, the running time is O((n + m)k).

4 (15 pts.) Shuffle it.

Let w ∈ Σ∗ be a string. A sequence of strings u1, u2, . . . , uh, where each ui ∈ Σ∗, is a valid split of w ⇐⇒ w = u1u2 · · · uh (i.e., w is the concatenation of u1, u2, . . . , uh). Given a valid split u1, u2, . . . , uh of w, its price is p(w) = |u1|² + |u2|² + · · · + |uh|². For example, for the string INTRODUCTION, the split INT · RODUC · TION has price 3² + 5² + 4² = 50.

Given two languages T1, T2 ⊆ Σ∗, a string w is a shuffle iff there is a valid split u1, u2, . . . , uh of w such that u2i−1 ∈ T1 and u2i ∈ T2, for all i (for simplicity, assume that ε ∈ T1 and ε ∈ T2). You are given a subroutine isInT(x, i) which outputs whether the input string x is in Ti or not, for i ∈ {1, 2}. To evaluate the running time of your solution you can assume that each call to isInT takes constant time.

Describe an efficient algorithm that, given a string w = w1w2 . . . wn of length n and access to T1 and T2 via isInT, outputs the minimum price of a shuffle if one exists. Your algorithm should output ∞ if there is no valid shuffle. You will get partial credit for a correct, but slow (but still efficient), algorithm. An exponential time or incorrect algorithm would get no points at all.
What is the running time of your algorithm?

The easier solution is via a graph construction. First compute the two sets

Pα = {(i, j, α) | 1 ≤ i ≤ j ≤ n and wiwi+1 . . . wj ∈ Tα}, for α ∈ {1, 2}.

This takes O(n²) time using O(n²) calls to isInT. Build a graph G over the set of vertices V = P1 ∪ P2 ∪ {s, t}. Connect s to all vertices (1, j, 1) ∈ P1, setting the weight of the edge to be j²; similarly connect s to all vertices (1, j, 2) ∈ P2 with weight j² (this handles shuffles whose first piece, from T1, is ε). Connect to t all the vertices of the form (i, n, 1) ∈ P1 and all the vertices of the form (i, n, 2) ∈ P2, with weight 0. Finally, for every triple i, j, k, connect (i, j, 1) ∈ P1 to (j + 1, k, 2) ∈ P2, if both exist, by an edge (i, j, 1) → (j + 1, k, 2) of weight (k − (j + 1) + 1)². Similarly, connect (i, j, 2) ∈ P2 to (j + 1, k, 1) ∈ P1, if both exist, by an edge (i, j, 2) → (j + 1, k, 1) of weight (k − (j + 1) + 1)². Here, an edge pays for the piece it enters. Let G be the resulting graph.

Clearly, this graph is a DAG with O(n²) vertices and O(n³) edges, and the minimum price of a shuffle equals the length of the shortest path in this graph from s to t. Since the graph is a DAG, this path can be computed in time linear in the size of the graph using topological sort. Another natural and acceptable solution is of course to use dynamic programming.

Rubric: (I) −1: Solution runs in O(n³ log n) time. (II) −2: Solution runs in O(n⁴) time. (III) −2.5: Solution runs in O(n⁴ log n) time. (IV) −15: Exponential time.

5 (15 pts.) Dumb and dumbbell.

For k > 1, a (k, k)-dumbbell is a graph formed by two disjoint complete graphs (cliques), each one on k vertices, plus an edge connecting the two (i.e., it is a graph with 2k vertices). [The figure showing a (7, 7)-dumbbell is omitted from this transcription.] The DUMB problem is the following: given an undirected graph G = (V, E) and an integer k, does G contain a (k, k)-dumbbell as a subgraph? Prove that DUMB is NP-Complete.
The problem is clearly in NP. Given an instance G, k, a certificate consists of two lists of k vertices each. One can readily check in polynomial time that the two lists are disjoint, that each list spans a clique of size k, and that there is an edge between some pair of vertices in the two lists.
As for the reduction from an NP-Complete problem: consider an instance G, k of CLIQUE. We can safely assume that k > 4, as such small instances can be solved directly in polynomial time. Consider the graph formed by adding to G a new disjoint clique C on k vertices, and connecting all the vertices of G to one special vertex v ∈ V(C). Let H be the resulting graph.

Clearly, if G has a clique K of size k, then (K, C) forms a (k, k)-dumbbell in H, as v is adjacent to every vertex of K.

Similarly, suppose H has a (k, k)-dumbbell (K′, K′′). The two cliques are disjoint, so at least one of them, say K′, does not contain v. The vertices of C \ {v} have no neighbors outside C, so a k-clique avoiding v cannot contain any of them (it would have to lie inside C \ {v}, which has only k − 1 vertices). Hence all the vertices of K′ are from G, and they form a clique of size k in G, as desired.
Thus, H has a (k, k)-dumbbell ⇐⇒ G has a clique of size k. Clearly, the reduction is polynomial time.
We conclude that DUMB is NP-Complete.
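The reduction can be sanity-checked on toy instances (a sketch; all names are hypothetical helpers, and the dumbbell test is exponential, for tiny graphs only):

```python
from itertools import combinations

def reduce_clique_to_dumb(n, edges, k):
    """Build H from a CLIQUE instance (G on vertices 0..n-1, integer k):
    add a new k-clique C on vertices n..n+k-1 and join its special
    vertex v = n to every vertex of G."""
    H = {frozenset(e) for e in edges}
    for a, b in combinations(range(n, n + k), 2):
        H.add(frozenset((a, b)))            # edges inside C
    for u in range(n):
        H.add(frozenset((u, n)))            # join v = n to all of G
    return n + k, H

def has_dumbbell(nH, H, k):
    """Brute force: two disjoint k-cliques joined by at least one edge."""
    def is_clique(S):
        return all(frozenset(e) in H for e in combinations(S, 2))
    for A in combinations(range(nH), k):
        if not is_clique(A):
            continue
        rest = [x for x in range(nH) if x not in A]
        for B in combinations(rest, k):
            if is_clique(B) and any(frozenset((a, b)) in H
                                    for a in A for b in B):
                return True
    return False
```

On a triangle (which contains K3) the reduction yields a graph with a (3, 3)-dumbbell; on a path with three vertices it does not.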

6 (15 pts.) You are the decider.
Prove (via reduction) that the following language is undecidable.
L = {⟨M⟩ | M is a Turing machine that accepts at least 374 strings}. (You cannot use Rice's Theorem in solving this problem.)
Same old, same old.
For the sake of argument, suppose there is an algorithm Decide374 that correctly decides the
language L. Then we can solve the Halting problem as follows:
DecideHalt(⟨M, w⟩):
    Encode the following Turing machine M′:
        M′(x):
            run M on input w
            return True
    if Decide374(⟨M′⟩)
        return True
    else
        return False
We prove this reduction correct as follows:
=⇒ Suppose M halts on input w.
Then M′ accepts every string in Σ∗ (in particular, at least 374 strings).
So Decide374 accepts the encoding ⟨M′⟩.
So DecideHalt correctly accepts the encoding ⟨M, w⟩.
⇐= Suppose M does not halt on input w.
Then M′ diverges on every input string x.
In particular, M′ does not accept any string (and especially not 374 of them). So Decide374 rejects the encoding ⟨M′⟩.
So DecideHalt correctly rejects the encoding ⟨M, w⟩.
In both cases, DecideHalt is correct. But that's impossible, because the Halting problem is undecidable. We conclude that the algorithm Decide374 does not exist.

7 (15 pts.) Yellow street needs bits.
There are n customers living on yellow street in Shampoo-banana. Yellow street is perfectly straight, going from south by southeast to north by northwest. The ith customer lives at distance si meters from the beginning of the street (i.e., you are given n numbers 0 ≤ s1 < s2 < · · · < sn). A new internet provider, Bits4You, is planning to connect all of these customers together using a wireless network. A base station, which can be placed anywhere along yellow street, can serve all the customers within distance r of it. The input is s1, s2, . . . , sn, r. Describe an efficient algorithm that computes the smallest number of base stations that can serve all the n customers. Namely, every one of the n customers must be within distance ≤ r of some base station that your algorithm decided to build. Incorrect algorithms will earn few, if any, points. (Your algorithm outputs just the number of base stations; there is no need to output their locations.) Prove the correctness of your algorithm. What is the running time of your algorithm?

This is a classical greedy algorithm problem. First, break the points into groups by scanning the locations in increasing order. Starting at s1, let si1 be the first location at distance > 2r from s1. Continue scanning, and let si2 be the first location at distance larger than 2r from si1. In general, in the jth iteration, set sij+1 to be the first location at distance larger than 2r from sij. Let sik be the last location marked this way.
This breaks the locations into groups

I1 = {s1, . . . , si1−1}
I2 = {si1, . . . , si2−1}
. . .
Ik = {sik−1, . . . , sik−1}
Ik+1 = {sik, . . . , sn}.
For each interval Ij, we place a base station at the location mj = (left(Ij) + right(Ij))/2, where left(Ij) and right(Ij) are the leftmost and rightmost locations in the interval Ij.
Running time. Since the locations are given in sorted order, the running time is O(n).
Correctness. Let o1 < o2 < · · · < ou be the optimal solution, and assume for the sake of contradiction that u < k + 1. We can safely assume that each client is assigned to its nearest base station, which implies that the optimal solution breaks the clients into groups O1, O2, . . . , Ou, sorted from left to right. If o1 < m1, then we can set o1 = m1; this does not make the solution any worse. Similarly, if o1 > m1, then we can again set o1 = m1, since O1 cannot contain any client from I2. We now repeat this argument, inductively, moving the locations of the optimal solution to the greedy algorithm's locations. Clearly, if u < k + 1, then the clients in Ik+1 cannot be served by the new solution (which has the same coverage as the optimal solution), which is a contradiction.
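The greedy algorithm, as a short Python sketch (assuming the locations are given sorted, as in the problem statement; it places each station at the left end of its group plus r rather than at the midpoint mj, which yields the same count):

```python
def min_base_stations(s, r):
    """s: sorted customer locations; r: coverage radius.
    Returns the minimum number of base stations, in O(n) time."""
    count, i, n = 0, 0, len(s)
    while i < n:
        count += 1
        # one station at s[i] + r covers every customer in [s[i], s[i] + 2r]
        reach = s[i] + 2 * r
        while i < n and s[i] <= reach:
            i += 1
    return count
```

For example, min_base_stations([0, 1, 2, 10], 1) returns 2: one station covers the customers at 0, 1, 2, and a second covers the customer at 10.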