
EECS 376: Foundations of Computer Science, Fall 2020, University of Michigan
EECS 376 Midterm SOLUTIONS
The multiple-choice portion of the exam consists of 10 randomly selected questions out of the “Multiple Choice”, “All/Some/No”, and “True/False” sections below. The written portion of the exam consists of 4 randomly selected questions out of the “Written Answer” section below. These two portions are released as separate Canvas quizzes. You are to submit the answers to the multiple-choice portion on Canvas. Submit your written solutions to Gradescope as you would for a homework assignment. You will not submit anything on Canvas for the written part.
The multiple-choice Canvas quiz has a time limit of 30 minutes, plus a 10-minute grace period. The expected time to complete the written portion is 40 minutes. However, you may submit any time within the 2-hour window for the exam. Both portions must be submitted before 9pm Eastern Time.


Logistics:
• The exam will take place on Monday, October 12, 7pm – 9pm Eastern Time.
• You must submit your multiple-choice answers on Canvas prior to 9pm Eastern Time. You will have 30 minutes plus a 10-minute grace period to take the quiz, but it must be submitted before 9pm.
• You must submit your written answers to Gradescope prior to 9pm Eastern Time. There will be no grace period for the written portion. As with the homework, we suggest you start your submission at least 15 minutes before the deadline.
• You may use any static resources for the exam, including the textbooks, lecture slides, Google, etc. You may not use calculators (e.g. a physical calculator, WolframAlpha, etc.), as we consider that to be a dynamic resource.
• You are prohibited from searching for answers to any of the exam questions online.
• You are prohibited from soliciting help from anyone, whether in person, over text/chat, on StackOverflow, by making public Piazza posts, or by any other means.
• Your solutions must be entirely your own work.
• If you have clarification questions, make a private post on Piazza, and a staff member will respond as soon as possible.
• If you run into technical issues or have an emergency, contact the staff right away. Do not contact a fellow student for help.
Any deviation from these rules will constitute an Honor Code violation. In addition, the staff reserves the right not to grade any exam taken in violation of this policy.
Attest to the following honor pledge by signing your name below.
Honor pledge:
I have neither given nor received aid on this exam, nor have I concealed any violations of the Honor Code. I will not discuss the exam with anyone who has not already taken it.
I am taking the exam at the time I was assigned by the staff.
Signature:

Multiple Choice (3 points each)
1. Let f and g be functions mapping integers to integers. It is (⃝ always / sometimes / ⃝ never)
true that g(f(n)) = O(f(g(n))) for functions f and g.
2. For the recurrence T(n) = 2T(n^(1/4)) + O(log n), which of the following is the tightest asymptotic upper bound on T(n)?
(Hint: use the substitution S(n) = T(2^n).)
O(log n)
⃝ O(n log n)
⃝ O(n log² n)
Solution: It is not always the case that g(f(n)) = O(f(g(n))). For instance, take f(n) = log n and g(n) = n². Then g(f(n)) = (log n)², while f(g(n)) = log(n²) = 2 log n = O(log n), and (log n)² is not O(log n); therefore in this case g(f(n)) ≠ O(f(g(n))).
Likewise, it is not always the case that g(f(n)) ≠ O(f(g(n))). For instance, take f(n) = n and g(n) = n². Then g(f(n)) = n² and f(g(n)) = n², therefore in this case g(f(n)) = O(f(g(n))). Hence the answer is sometimes.
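As a quick numerical sanity check of the first counterexample (this snippet is my own illustration, not part of the exam), the following Python code compares g(f(n)) = (log n)² against f(g(n)) = 2 log n; their ratio grows without bound, so g(f(n)) is not O(f(g(n))) for this choice of f and g.

import math

# Counterexample from the solution above: f(n) = log n, g(n) = n^2,
# so g(f(n)) = (log n)^2 and f(g(n)) = log(n^2) = 2 log n.
for n in [10, 10**3, 10**6, 10**9]:
    gf = math.log(n) ** 2   # g(f(n))
    fg = math.log(n ** 2)   # f(g(n))
    print(f"n = {n:>10}: g(f(n)) = {gf:9.2f}, f(g(n)) = {fg:7.2f}, ratio = {gf / fg:6.2f}")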
Solution: Using the substitution S(n) = T(2^n):
S(n) = T(2^n)
     = 2T((2^n)^(1/4)) + O(log 2^n)
     = 2T(2^(n/4)) + O(n)
     = 2S(n/4) + O(n).
By the Master Theorem with k = 2, b = 4, d = 1 (note log_4 2 = 1/2 < d), S(n) = O(n).
Now we use the facts that S(n) = O(n) and S(n) = T(2^n) to get an asymptotic bound on T(n): T(2^n) = O(n). Letting m = log n, this gives T(2^m) = O(m), and since 2^(log n) = n, we conclude that T(n) = T(2^(log n)) = O(log n).
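To sanity-check the O(log n) bound, here is a small Python sketch (my own, with an assumed base case T(n) = 0 for n ≤ 2, which the exam does not specify) that evaluates a concrete instance of the recurrence and compares it to log n; the ratio stays bounded by a small constant.

import math

def T(n: float) -> float:
    """One concrete instance of T(n) = 2*T(n^(1/4)) + log n, with T(n) = 0 for n <= 2."""
    if n <= 2:
        return 0.0
    return 2 * T(n ** 0.25) + math.log(n)

for n in [10**3, 10**6, 10**12, 10**24]:
    print(f"n = 10^{round(math.log10(n))}: T(n) = {T(n):7.2f}, "
          f"log n = {math.log(n):6.2f}, ratio = {T(n) / math.log(n):4.2f}")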

3. A bottom-up dynamic programming algorithm fills the entire table, where the table has dimensions n × m. What can be said about the runtime of the algorithm?
⃝ No conclusions can be drawn from the information given.
4. Consider an algorithm to multiply matrices that takes in two n × n matrices, X and Y , and outputs the resulting n × n matrix, Z. The algorithm performs the multiplication by partitioning each of the matrices X, Y , and Z into four n/2 × n/2 matrices. It makes recursive calls to multiply the n/2 × n/2 submatrices, then adds the values gained by these recursive calls to compute the values for each of the elements in Z. Which of the following is an accurate description of the paradigm used by this algorithm?
Divide and conquer
⃝ Dynamic programming
⃝ Both divide and conquer and greedy
⃝ Both dynamic programming and greedy
Solution: The bottom-up dynamic programming approach involves filling each entry in the table. There are n × m entries in the table, and each gets filled. This takes at least nm steps, which gives us a lower-bound of Ω(nm) for the algorithm. Note that O(nm) and Θ(nm) only hold if filling each entry takes constant time, and that is not always the case.
Solution: A divide and conquer algorithm divides the problem into smaller subproblems, solves each subproblem recursively, and combines the solutions to these subproblems in a meaningful way. We can see these elements in this algorithm: it divides the problem into smaller subproblems by partitioning the matrices into n/2 × n/2 matrices. We know that the algorithm “makes recursive calls to multiply the n/2 × n/2 submatrices,” meaning it solves these subproblems recursively. We can tell that it combines the solutions in a meaningful way because it “adds the values gained by these recursive calls to compute the values for each of the elements in Z.”
A greedy algorithm involves making the locally optimal choice at each stage, but in this problem there is no sense of optimizing any choice.
Dynamic programming algorithms give us a way to solve optimization problems, and with them we expect to see an optimal substructure and overlapping subproblems. Then we can reuse the solutions to these subproblems. However, as stated above, we are not trying to optimize a value in this problem, meaning we do not have any sense of optimal substructure; additionally, we are not reusing subproblem solutions.
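For concreteness, here is a minimal Python sketch of the divide-and-conquer scheme described in Question 4 (my own illustration, assuming n is a power of two): split each n × n matrix into four n/2 × n/2 blocks, recurse on the block products, and add the results.

def matmul(X, Y):
    """Divide-and-conquer product of two n x n matrices (lists of lists), n a power of two."""
    n = len(X)
    if n == 1:  # base case: 1 x 1 matrices
        return [[X[0][0] * Y[0][0]]]
    h = n // 2
    # Partition a matrix into four h x h blocks.
    split = lambda M: ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                       [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])
    A, B, C, D = split(X)
    E, F, G, H = split(Y)
    add = lambda P, Q: [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    # Eight recursive block products, then pairwise additions (the "combine" step).
    top = [add(matmul(A, E), matmul(B, G)), add(matmul(A, F), matmul(B, H))]
    bot = [add(matmul(C, E), matmul(D, G)), add(matmul(C, F), matmul(D, H))]
    # Stitch the four result blocks back into one n x n matrix.
    return [top[0][i] + top[1][i] for i in range(h)] + [bot[0][i] + bot[1][i] for i in range(h)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]

This recursion performs eight half-size multiplications plus O(n²) additions per level, giving the familiar T(n) = 8T(n/2) + O(n²) = O(n³) bound.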
5. There are N students in a room standing in a straight line, and there are N seats that are also in a straight line parallel to the students’ line. The students can move in any direction, taking one step at a time, and each step takes one second. The task is to assign the students to seats so that the time for all students to reach their assigned seat is minimized. The algorithm finds the student that is closest to an unoccupied chair (breaking ties arbitrarily), assigns them to that chair, and then repeats the process on the remaining students. Which of the following is an accurate description for the paradigm used by this algorithm?

⃝ Divide and conquer
⃝ Dynamic programming
Greedy
⃝ Both divide and conquer and greedy
⃝ Both dynamic programming and greedy

6. Which of the following statements is true?
⃝ If L1 and L2 are undecidable languages then L1 ∪ L2 is undecidable.
⃝ If L1 and L2 are undecidable languages then L1 ∩ L2 is undecidable.
⃝ If L is undecidable, then L is unrecognizable.
If L is unrecognizable, then L is undecidable.
Solution: At no point in the algorithm is the problem of assigning N students to N seats subdivided into smaller subproblems, therefore this algorithm has no divide-and-conquer aspects to it. There are no overlapping subproblems and no memoization or tabulation of solutions to subproblems of any kind either, so this algorithm is certainly not a dynamic-programming algorithm.
At every step, the algorithm looks for a locally optimal decision to make (for every student, it finds the closest unoccupied chair without considering other students), therefore this is a greedy algorithm, and greedy only.
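For illustration only (this code is mine, not part of the exam), here is a minimal Python sketch of the greedy procedure from Question 5, modeling students and seats as integer positions on parallel lines and measuring distance in steps:

def greedy_assign(students, seats):
    """Repeatedly pair the student who is closest to some unoccupied seat with that seat.

    students, seats: lists of integer positions along the two parallel lines.
    Returns a list of (student_position, seat_position) assignments.
    """
    remaining_students = list(students)
    remaining_seats = list(seats)
    assignment = []
    while remaining_students:
        # Locally optimal choice: the closest (student, seat) pair still unassigned.
        s, c = min(((s, c) for s in remaining_students for c in remaining_seats),
                   key=lambda pair: abs(pair[0] - pair[1]))
        assignment.append((s, c))
        remaining_students.remove(s)
        remaining_seats.remove(c)
    return assignment

# Example: the completion time is the largest distance any student must walk.
pairs = greedy_assign(students=[0, 4, 9], seats=[1, 5, 10])
print(pairs, "time =", max(abs(s - c) for s, c in pairs))

As the solution notes, each choice here is only locally optimal; the sketch mirrors the stated procedure rather than certifying that it minimizes the completion time.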
Solution:
• If L1 and L2 are undecidable languages then L1 ∪ L2 is undecidable. For this choice consider LHALT and its complement. Their union is Σ∗, which is decidable.
• If L1 and L2 are undecidable languages then L1 ∩ L2 is undecidable. For this choice consider LHALT and its complement. Their intersection is ∅, which is decidable.
• If L is undecidable, then L is unrecognizable.
For this choice consider LHALT. It is undecidable, but recognizable.
• If L is unrecognizable, then L is undecidable.
For this choice, observe that the contrapositive is “If L is decidable, then L is recognizable,” which holds because any decider for a language L is also a recognizer for L. Hence this is the true statement.
7. A finite subset of an undecidable language is (⃝ always / ⃝ sometimes / never) undecidable.
8. Let L1 be a decidable language and L2 ⊆ L1. Then L2 is (⃝ always / sometimes / ⃝ never) decidable.
Solution: Finite languages are always decidable: simply hard code each string in the language into a decider and accept only on those strings.
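As a sketch of why finite languages are decidable (this example is my own; the particular strings are made up), hard-coding the members of a finite language immediately yields a decider:

# Hypothetical finite language: any fixed, finite set of strings, for example a
# finite subset of an undecidable language. Hard-coding its members gives a decider.
FINITE_LANGUAGE = {"0", "010", "1101"}  # illustrative strings, not from the exam

def decide(x: str) -> bool:
    """Accept exactly the hard-coded strings, reject everything else; always halts,
    so the finite language is decidable."""
    return x in FINITE_LANGUAGE

print(decide("010"), decide("111"))  # True False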

Solution: L2 is sometimes decidable.
Case 1: L2 is decidable.
Consider the case where L1 = Σ∗. L1 is decidable, and the Turing Machine S described below decides it.
S = “on input (⟨x⟩):
1: Accept x”
Now let L2 = Σ∗ as well. L2 is a subset of L1 because every language is a subset of Σ∗, and L2 = Σ∗ is decidable as shown above.
Case 2: L2 is undecidable.
For this case let L1 be Σ∗ again and let L2 be the complement of LHalt. We know that L2 is a subset of L1 because every language is a subset of Σ∗, as mentioned before. The complement of LHalt is undecidable because LHalt ≤T L2 and LHalt is undecidable; recall that for any language L, L Turing-reduces to its complement (proven in HW-4 question 5(c)).
9. If A ≤T B and B is decidable, which of the following cannot be true?
⃝ A is decidable.
A is undecidable.
⃝ A is recognizable.
⃝ B ≤T A.
10. The complement of a recognizable language is (⃝ always / sometimes / ⃝ never) recognizable.
Solution: Option 1 is true because if A ≤T B and B is decidable, A must be decidable as we can just use the decider for B to create a decider for A. Option 2 cannot be true because, as explained before, A is decidable. Option 3 is true because all decidable languages are recognizable, so we know A is recognizable. Option 4 is true because any decidable language Turing reduces to any other decidable language, so B ≤T A. Thus, Option 2 is our only option that cannot be true.
Solution: Suppose a language is recognizable and decidable. Then, its complement is decidable and therefore also recognizable.
Suppose a language is recognizable but undecidable. Then, its complement is both undecidable and unrecognizable. This follows from the fact that if a language and its complement are recognizable then the language is decidable. The contrapositive states that if a language is undecidable, then at most one of the language and its complement is recognizable. If the language is recognizable, then its complement therefore must be unrecognizable.
We have shown a case where the complement of a recognizable language is recognizable and a case where it is unrecognizable. Therefore, the answer is sometimes.
11. The language
L = {⟨x⟩ : x is the name of a UMich student taking EECS 376 in Fall 2020}
is:

Decidable and recognizable.
⃝ Undecidable and recognizable.
⃝ Undecidable and unrecognizable.
⃝ Decidable and unrecognizable.
⃝ None of the other choices apply.
Solution: The language L is a set of names, which is finite. One can construct a program M that decides L by going through the set and determining if a given input x is in the set or not. Because the set is finite, M always halts. M accepts an input x if x ∈ L and rejects if x ̸∈ L. Therefore, because a decider exists for this language, language L is decidable.
A language is recognizable if there is a recognizer that accepts the input x if x ∈ L and either rejects or loops if x ̸∈ L. The decider fits the definition of a recognizer: M accepts if x ∈ L and rejects if x ̸∈ L. In other words, a decider is a recognizer for the language it decides. So language L is recognizable.
Thus, language L is decidable and recognizable.
12. Let S be an uncountable set of languages over {0, 1}. Let L ∈ S. Then L is (⃝ always / sometimes / ⃝ never) decidable.
Solution: Let S be the set of all languages over {0, 1}. As we saw earlier, this S is uncountable. Yet ∅ ∈ S, which is a decidable language, while LACC ∈ S, which is undecidable. So depending on which L ∈ S is chosen, L may or may not be decidable; the answer is sometimes.
13. Consider the following function, which takes in non-negative integers a and b that are powers of two.
1: function ALG(a, b)
2:     if a = 1 or b = 1 or a = b then return 0
3:     if a > b then return ALG(a/2, 2b)
4:     if b > a then return ALG(2a, b/2)
Which of the following functions can be used as a potential function to prove that the algorithm ALG halts on all valid inputs a, b?
None of these functions works as a potential function for ALG.
Solution: To use the potential method, we would have to exhibit a function of a and b whose value strictly decreases on each iteration of ALG. However, if we set a = 4, b = 8, then after one iteration we have a = 8, b = 4, and after two iterations we again have a = 4, b = 8. Since (a, b) takes on identical values in round 0 and round 2, there cannot exist a potential function that is monotonically decreasing; therefore, no function can serve as a potential function for ALG. It is also worth noting that the algorithm does not terminate on input (4, 8), as well as on many other inputs.
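A quick way to see the cycle described above (this snippet is my own illustration, not exam material) is to trace the argument pairs that ALG visits:

def trace_alg(a: int, b: int, max_steps: int = 6):
    """Print the (a, b) pairs ALG visits; on (4, 8) the pair repeats, so ALG never halts."""
    for step in range(max_steps):
        print(f"step {step}: (a, b) = ({a}, {b})")
        if a == 1 or b == 1 or a == b:
            print("ALG returns 0")
            return
        if a > b:
            a, b = a // 2, 2 * b
        else:  # b > a
            a, b = 2 * a, b // 2

trace_alg(4, 8)   # (4, 8) -> (8, 4) -> (4, 8) -> ... : a 2-cycle, so no potential function exists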

14. Let G be a connected, weighted graph with at least three edges and let T be an MST of G. Let G′ be defined by doubling the weight of the edge with the third-lowest weight in G, and let T′ be an MST of G′. In other words, G′ has all the same edges as G, except that the weight of the third-lowest weight edge is doubled. Then the weight of T′ is (⃝ always / sometimes / ⃝ never) higher than the weight of T.
Solution: T′ will usually have higher weight. For one example, consider a stick graph of four vertices with each of the three edges having weight 1. The MST is the whole graph, and so the weight of the MST increases from 3 to 4, when we change one of the edges to have weight 2. Formally, V = {A, B, C, D}, and E = {(A, B), (B, C), (C, D)}, and the weight of each edge is 1.
However, sometimes T will have the same weight as T′. For example, if G is a triangle graph, i.e. V = {A, B, C} and E = {(A, B), (B, C), (A, C)}, again with all edges having weight 1, then an MST of G is any two of these edges. Thus, changing one of these edge weights does not change the weight of the MST.
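Here is a small Python sketch (my own, using a minimal Kruskal implementation) that checks both examples: the path graph's MST weight increases from 3 to 4, while the triangle's stays at 2.

def mst_weight(n, edges):
    """Kruskal's algorithm: total weight of an MST of a graph on vertices 0..n-1.
    edges is a list of (weight, u, v) triples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total

# Path (stick) graph A-B-C-D, all weights 1: doubling one edge raises the MST weight (3 -> 4).
print(mst_weight(4, [(1, 0, 1), (1, 1, 2), (1, 2, 3)]),
      mst_weight(4, [(1, 0, 1), (1, 1, 2), (2, 2, 3)]))

# Triangle A-B-C, all weights 1: doubling one edge leaves the MST weight unchanged (2 -> 2).
print(mst_weight(3, [(1, 0, 1), (1, 1, 2), (1, 0, 2)]),
      mst_weight(3, [(1, 0, 1), (1, 1, 2), (2, 0, 2)]))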
15. Let T be a minimum spanning tree of a graph G. Construct G′ by adding a new vertex v to G and two edges (v, u) and (v, w) of weight 281 and 376, respectively, that connect to existing vertices u and w in G. Then T ∪ {(v, u), (v, w)} is (⃝ always / ⃝ sometimes / never) a minimum spanning tree for the new graph G′.
Solution: Since T is a spanning tree of G, the vertices u and w are already connected by a path in T. Adding both edges (v, u) and (v, w) therefore creates a cycle through v. A minimum spanning tree cannot contain a cycle, hence T ∪ {(v, u), (v, w)} is never a minimum spanning tree of G′.

True/False. Determine whether each of the following statements is true or false.
16. Define
Mult = {(a, b, c) : a · b = c}
MST = {(G, k) : G has a minimum spanning tree of size at least k}.
Then, Mult ≤T MST.
Solution: Mult ≤T MST is true.
Assume MST is decidable, then let M be a decider for MST. Here is a decider D for Mult:
D = “On input (a,b,c) :
1) Multiply a and b (using Karatsuba or any other multiplication algorithm) and assign the result to k.
2) if c = k accept, otherwise reject.”
D always halts.
(a, b, c) ∈ Mult =⇒ a · b = c =⇒ D accepts
(a, b, c) ∉ Mult =⇒ a · b ≠ c =⇒ D rejects
We constructed a decider D for Mult given access to M (in fact, without ever calling it), therefore Mult ≤T MST.
Remark: it is worth pointing out that in general, a decidable language is Turing reducible to any other language (including undecidable languages). Therefore, in order to solve this question it is sufficient to observe that Mult is decidable.
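As a concrete, deliberately trivial Python sketch of the decider D (mine, not part of the exam):

def decide_mult(a: int, b: int, c: int) -> bool:
    """Decider D for Mult: multiply a and b, then compare with c. Always halts.
    Note that it never needs to query a decider for MST."""
    k = a * b          # step 1: compute the product
    return c == k      # step 2: accept iff c equals the product

print(decide_mult(6, 7, 42), decide_mult(6, 7, 43))  # True False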
17. Recall that si = xi + yi is the potential function discussed in lecture for the Euclidean algorithm. The algorithm can also be shown to always terminate with a different potential function pi = xi ∗ yi.
Solution: An algorithm terminates if there is a potential function that strictly decreases at each iteration and is bounded below. The Euclidean algorithm discussed in lecture maintains x ≥ y ≥ 0, so pi = xi ∗ yi is bounded below by 0. At each iteration we set x ← y and y ← x mod y, so pi+1 = xi+1 ∗ yi+1 = yi ∗ (xi mod yi). Since xi mod yi lies in the range [0, yi) and yi ≤ xi, we have xi mod yi < xi. Thus pi+1 = yi ∗ (xi mod yi) < yi ∗ xi = pi whenever yi > 0 (and yi = 0 only at the final iteration), so the potential strictly decreases at every iteration. Because the potential function is bounded below and decreases at every iteration, the algorithm will always terminate.
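To make the argument concrete, here is a short Python sketch (my own) that runs the Euclidean algorithm and prints the potential p = x·y at each iteration; it strictly decreases until y reaches 0.

def gcd_with_potential(x: int, y: int) -> int:
    """Euclidean algorithm, printing the potential p_i = x_i * y_i at each step."""
    assert x >= y >= 0
    while y > 0:
        print(f"x = {x:3d}, y = {y:3d}, potential x*y = {x * y}")
        x, y = y, x % y   # the update from lecture: x <- y, y <- x mod y
        # new potential = y_old * (x_old mod y_old) < x_old * y_old
    print(f"x = {x:3d}, y = {y:3d}, potential x*y = {x * y}  (y = 0, halt)")
    return x

print("gcd =", gcd_with_potential(376, 282))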
18. Any graph with at least two edges of the same weight has more than one minimum spanning tree.
⃝ True

Solution: In order for this statement to be true, it must be true for all graphs. This means we can prove it is false by finding a counterexample. A counterexample is a stick graph of three vertices, where both edges have the same weight. There is only one MST, and it contains both edges.
19. A diagonalization argument can be used to show that the set of all rational numbers is uncountable.
⃝ True
20. A Turing Machine with 1047 tapes can decide more languages than a Turing Machine with 29 tapes.
⃝ True
Solution: The set of all rational numbers is provably countable (see lecture slides), so no sound technique can be used to demonstrate that it is uncountable.
Solution: Recall that we showed that a single-tape Turing machine has the same computational power as a double-tape Turing machine. This proof actually holds for any finite number of tapes. In other words, any Turing machine with any number of tapes can be simulated with a single tape Turing machine. In particular, both a Turing machine with 1047 tapes and 29 tapes can be simulated with a single-tape Turing machine. Both can also simulate a single-tape Turing machine. This implies that a Turing machine with 1047 tapes can decide the same languages as a single-tape Turing machine, which can decide the same languages as a Turing machine with 29 tapes. Thus, a Turing machine with 1047 tapes can decide the same languages as a Turing machine with 29 tapes.
21. There exists a language that is Turing reducible to uncountably many languages.
22. The language
L = {x | x has at least as many 0s as 1s}
is decidable.
Solution: A decidable language is Turing reducible to any language, including undecidable lan- guages. As there are uncountably many undecidable languages, the statement is true.
Solution: L is decidable.
We can construct a decider D for it as follows:

D = “on input (⟨x⟩):
1: Initialize a variable “oneCount” to 0
2: Initialize a variable “zeroCount” to 0
3: for each character c in x do
4:     if c = 1 then
5:         increment oneCount
6: for each character c in x do
7:     if c = 0 then
8:         increment zeroCount
9: if zeroCount ≥ oneCount then
10:     accept x
11: else
12:     reject x”
Recall that an input to any Turing Machine must always be finite. This is why the for loops, and thus the algorithm itself, will always terminate. We can also prove the correctness of the decider:
• x ∈ L =⇒ x has at least as many 0s as 1s =⇒ zeroCount ≥ oneCount =⇒ D accepts x
• x ∉ L =⇒ x does not have at least as many 0s as 1s =⇒ zeroCount < oneCount =⇒ D rejects x
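For completeness, a minimal Python rendering of the decider D above (my own sketch, mirroring the pseudocode):

def decide_L(x: str) -> bool:
    """Decider D for L = {x : x has at least as many 0s as 1s}.
    Counts 0s and 1s in two passes over the (finite) input, then compares; always halts."""
    one_count = sum(1 for c in x if c == "1")
    zero_count = sum(1 for c in x if c == "0")
    return zero_count >= one_count   # accept iff at least as many 0s as 1s

print(decide_L("0100"), decide_L("110"))  # True False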