
CSCI 3110 Practice Final Solutions 29.11.2019

Topics: Asymptotic notation, recurrence relations, divide-and-conquer algo-
rithms, greedy algorithms, dynamic programming, algorithms with data struc-
tures, minimum spanning trees, NP-completeness, graph algorithms

You should try all eight questions. Your best six answers will count towards
your mark. All questions are weighted equally. Good luck!

I admit, I made this too long and hard. The actual final should be easier.

1. Which are true statements?

(a) $n^{2-\epsilon} = o(n^2/\log n)$ for any constant $\epsilon > 0$

(b) $(\lg n)^{\lg n} = n^{O(1)}$

(c) $\binom{n}{k} = \Theta(n^k)$ for $1 \le k \le n$

(d) $3^{2\lg n} = \Omega(2^{3\lg n})$

(e) $1 + 1/2 + 1/3 + 1/4 + \cdots + 1/n = \omega(n^{\epsilon})$ for some constant $\epsilon > 0$

Solution: a, d
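
To see why only (a) and (d) are true:

$$\frac{n^{2-\epsilon}}{n^2/\log n} = \frac{\log n}{n^{\epsilon}} \to 0
\qquad\text{and}\qquad
3^{2\lg n} = n^{2\lg 3} = \Omega(n^3) = \Omega(2^{3\lg n}),$$

while $(\lg n)^{\lg n} = n^{\lg\lg n} \ne n^{O(1)}$, $\binom{n}{n} = 1 \ne \Theta(n^n)$, and $1 + 1/2 + \cdots + 1/n = \Theta(\log n) = o(n^{\epsilon})$ for every constant $\epsilon > 0$.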

2. Suppose you have figured out how, given two $n \times n$ matrices, you can compute their product by splitting each of them into sixteen $n/4 \times n/4$ submatrices and performing a total of 67 additions and subtractions of $n/4 \times n/4$ matrices and $k$ multiplications of $n/4 \times n/4$ matrices. The best known time bound for multiplying two $n \times n$ matrices is $O(n^{2.3728639})$; how small does $k$ have to be for you to beat that?

Solution: 26. For $k > 16$ we’ll use Case 1 of the Master Theorem, so we’re looking for the largest value of $k$ such that $n^{\log_4 k} = o(n^{2.3728639})$. Working this out, we get $k < 4^{2.3728639} = 2^{2 \cdot 2.3728639} \approx 26.83$, so 26 is small enough and 27 isn’t. Double-checking, for

$$T(n) = 26\,T(n/4) + 67(n/4)^2$$

we have $67(n/4)^2 = O(n^{\log_4 26 - \epsilon})$ for some constant $\epsilon > 0$, since $\log_4 26 \approx 2.35 > 2$, so $T(n) = \Theta(n^{\log_4 26})$.
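
As a quick numerical sanity check of the exponent comparison, here is a small Python sketch (the constant 2.3728639 is just the bound quoted in the question):

```python
import math

# With T(n) = k T(n/4) + Theta(n^2) and k > 16, the Master Theorem gives
# T(n) = Theta(n^(log_4 k)), so we need log_4 k < 2.3728639 to beat the known bound.
for k in (26, 27):
    print(k, math.log(k, 4), math.log(k, 4) < 2.3728639)
# 26 -> ~2.3502 (beats the bound); 27 -> ~2.3774 (does not)
```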

3. Prim’s and Kruskal’s are not the only algorithms for building minimum spanning trees. If we have many processors, we might want to use Borůvka’s algorithm instead: make each vertex a tree; until there is only 1 tree left, for each remaining tree, add the cheapest edge with exactly one endpoint in that tree. The figure below shows the steps of Borůvka’s algorithm on the example graph from the text; there are two possibilities because of the tie between $(b, c)$ and $(a, h)$.

[Figure: the steps of Borůvka’s algorithm on the example graph on vertices $a$ through $i$, showing the two possible outcomes of the tie between $(b, c)$ and $(a, h)$.]

The idea behind the proof of correctness of Borůvka’s algorithm is that if we add an edge $e$ between two little trees $T_1$ and $T_2$ because $e$ is the cheapest edge with exactly one endpoint in $T_1$, then any other path between $T_1$ and $T_2$ contains an edge with weight at least as great as $e$’s: the edge $e'$ on that path that touches $T_1$ must weigh at least as much as $e$, or Borůvka’s algorithm would add it instead of $e$.

Formalize this argument, using the following style and an exchange argument: “Before we take any steps, our (empty) subsolution can be extended to an optimal solution. Assume that after $i$ steps. . . ”.

Solution: Before we take any steps, our (empty) subsolution can be extended to an optimal solution. Assume that after $i$ steps, our subsolution can be extended to an optimal solution $S$. Suppose the $(i+1)$st edge $e$ we add is between trees $T_1$ and $T_2$ in our subsolution after $i$ steps, because $e$ is the cheapest edge with exactly one endpoint in $T_1$. If $S$ includes $e$, then after $i + 1$ steps our subsolution can still be extended to an optimal solution.

If $S$ does not include $e$, then consider the path $P$ between $T_1$ and $T_2$ in $S$. Notice $P$ must contain an edge $e'$ with exactly one endpoint in $T_1$. If we delete $e'$ from $S$, we cut $S$ into two parts, one of which contains $T_1$ and the other of which contains $T_2$. If we then add $e$, we reconnect those two parts, obtaining a new solution $S'$. Since $\mathrm{cost}(S') = \mathrm{cost}(S) + \mathrm{cost}(e) - \mathrm{cost}(e')$ and $\mathrm{cost}(e) \le \mathrm{cost}(e')$, $S'$ is also an optimal solution. Therefore, after $i + 1$ steps, our subsolution can be extended to an optimal solution.
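
For concreteness, here is a minimal Python sketch of Borůvka’s algorithm (sequential; it assumes the graph is connected and, to sidestep ties like the one between $(b, c)$ and $(a, h)$, that edge weights are distinct):

```python
def boruvka_mst(n, edges):
    """Borůvka's algorithm: n vertices (0..n-1), edges is a list of
    (weight, u, v) tuples.  Assumes a connected graph with distinct
    edge weights.  Returns the edges of a minimum spanning tree."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        # cheapest[r]: cheapest edge with exactly one endpoint in the tree rooted at r
        cheapest = {}
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                if ru not in cheapest or w < cheapest[ru][0]:
                    cheapest[ru] = (w, u, v)
                if rv not in cheapest or w < cheapest[rv][0]:
                    cheapest[rv] = (w, u, v)
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:  # the same edge may have been picked by both of its trees
                parent[ru] = rv
                mst.append((u, v, w))
                components -= 1
    return mst
```

Each tree’s cheapest outgoing edge can be found independently of the others, which is why Borůvka’s algorithm is attractive when we have many processors.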

4. Suppose your prof is going for a walk along an $n$-meter beach in Spain, at a slow but constant rate of 1 meter per second (because he is thinking hard about a problem). He can eat an ice-cream in 120 seconds, and every 50 meters there is an ice-cream stall. Your prof eats ice-cream to help him think, so

• all the ice-cream vendors know him and he can buy ice-cream without slowing
down,

• he can eat ice-cream without changing his pace,
• he can hold only one ice-cream at a time (or he’ll be distracted from his problem).

If all ice-cream were created equal, your prof would simply use a greedy algorithm to maximize his ice-cream consumption: he would just buy an ice-cream whenever he passed a stall with his hands empty. The ice-cream sold at the $i$th stall has value $v_i$, however, and your discerning prof wants to maximize the total value of the ice-cream he eats. For example, if $n = 320$ and $v_1 = 6, v_2 = 9, v_3 = 1, v_4 = 5, v_5 = 7, v_6 = 2$ then he should buy ice-cream at the 2nd and 5th stalls, to obtain value $v_2 + v_5 = 9 + 7 = 16$.

Since your prof is busy thinking about his problem, help him by designing an efficient algorithm that, given $n$ and $v_1, \ldots, v_{\lfloor n/50 \rfloor}$ (which you can assume are all positive integers), tells him the maximum ice-cream value he can obtain. You need not prove your algorithm correct.

Solution: There are actually several ways to do this. The most efficient way I can see is to build an array $A[0..\lfloor n/50 \rfloor][0..3]$ in which row $A[0] = [0, 0, 0, 0]$ and for $1 \le i \le \lfloor n/50 \rfloor$ row $A[i]$ stores

• A[i][0], the maximum value your prof can obtain from the first i stalls such that
he passes the ith stall with empty hands;

• A[i][1], the maximum value your prof can obtain from the first i stalls such that
he passes the ith stall holding 20 seconds worth of ice-cream;

• A[i][2], the maximum value your prof can obtain from the first i stalls such that
he passes the ith stall holding 70 seconds worth of ice-cream;

• A[i][3], the maximum value your prof can obtain from the first i stalls such that
he passes the ith stall holding 120 seconds worth of ice-cream (meaning he bought
it there).

For $1 \le i \le \lfloor n/50 \rfloor$, we fill in row $A[i]$ with the recurrence

$$\begin{aligned}
A[i][0] &= \max(A[i-1][0], A[i-1][1])\\
A[i][1] &= A[i-1][2]\\
A[i][2] &= A[i-1][3]\\
A[i][3] &= \max(A[i-1][0], A[i-1][1]) + v_i\,.
\end{aligned}$$

The answer is then the maximum entry in the last row, $A[\lfloor n/50 \rfloor]$.
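
A minimal Python sketch of this dynamic program (the function name is mine; it reports the best entry of the last row, on the assumption that your prof may still be eating when the beach ends, as in the example):

```python
def max_ice_cream_value(n, v):
    """DP for question 4.  v[i-1] is the value at the i-th of the n // 50
    stalls, which are 50 m (= 50 s) apart; eating takes 120 s, so an
    ice-cream bought at stall i is finished before stall i + 3."""
    m = n // 50
    # A[j]: best value obtainable so far, passing the current stall holding
    # j in {0: nothing, 1: 20 s, 2: 70 s, 3: 120 s (just bought)} of ice-cream.
    A = [0, 0, 0, 0]
    for i in range(1, m + 1):
        empty = max(A[0], A[1])      # hands are free once the 20 s remnant is gone
        A = [empty,                  # pass stall i without buying
             A[2],                   # 70 s left one stall ago -> 20 s left now
             A[3],                   # bought at the previous stall -> 70 s left
             empty + v[i - 1]]       # buy at stall i (hands must be free)
    return max(A)                    # he may still be eating when the beach ends

# The example from the question: n = 320 and values 6, 9, 1, 5, 7, 2 give 16.
print(max_ice_cream_value(320, [6, 9, 1, 5, 7, 2]))
```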

5. For the problem Partition, you are given a set $X = \{x_1, \ldots, x_n\}$ of positive integers and asked if there is a subset $\{x_{i_1}, \ldots, x_{i_k}\}$ of $X$ such that

$$x_{i_1} + \cdots + x_{i_k} = (x_1 + \cdots + x_n)/2\,.$$

(a) Show that Partition is in NP.

(b) Show that Partition is NP-hard by reducing Subset Sum to it.

(c) Show that Partition is NP-hard only in the weak sense by giving an $O(n(x_1 + \cdots + x_n))$-time dynamic-programming algorithm for it. You need not prove your algorithm correct.

Solution:

• Partition is in NP because, if someone gives us $X = \{x_1, \ldots, x_n\}$ and a supposed solution $i_1, \ldots, i_k$, then we can check in polynomial time that

$$x_{i_1} + \cdots + x_{i_k} = (x_1 + \cdots + x_n)/2\,.$$

• This part was harder than I expected! (Technically, this reduction also
doesn’t work if the instance of Subset Sum includes negative numbers.)

Given an instance $(Y = \{y_1, \ldots, y_m\}, t)$ of Subset Sum, in polynomial time we want to build an instance $X = \{x_1, \ldots, x_n\}$ of Partition such that there exists a subset $\{y_{j_1}, \ldots, y_{j_\ell}\}$ of $Y$ such that

$$y_{j_1} + \cdots + y_{j_\ell} = t$$

if and only if there exists a subset $\{x_{i_1}, \ldots, x_{i_k}\}$ of $X$ such that

$$x_{i_1} + \cdots + x_{i_k} = (x_1 + \cdots + x_n)/2\,.$$

Let $T = y_1 + \cdots + y_m$. If $t = T/2$ then we just set $X = Y$. Otherwise, we set $X = Y \cup \{y_{m+1}\} \cup \{y_{m+2}\}$, where $y_{m+1} = 2T + t$ and $y_{m+2} = 3T - t$. Notice

– $T + (2T + t) + (3T - t) = 6T$, and any subset of $X$ that sums to $3T$ includes exactly one of $y_{m+1}$ and $y_{m+2}$ (together they sum to $5T > 3T$, while all of $Y$ sums to only $T < 3T$);

– if there is a subset $\{y_{j_1}, \ldots, y_{j_\ell}\}$ of $Y$ that sums to $t$ then $\{y_{j_1}, \ldots, y_{j_\ell}, y_{m+2}\}$ is a subset of $X$ that sums to $3T$;

– if there is a subset of $X$ that includes $y_{m+1}$ and sums to $3T$, then the complement of that subset includes $y_{m+2}$ and also sums to $3T$;

– if there is a subset of $X$ that includes $y_{m+2}$ and sums to $3T$, then removing $y_{m+2}$ from that subset leaves a subset of $Y$ that sums to $t$.

• Let $S = x_1 + \cdots + x_n$. We build a Boolean array $A[0..n][0..S]$ with $A[0..n][0] = [\text{TRUE}, \ldots, \text{TRUE}]$ and $A[0][1..S] = [\text{FALSE}, \ldots, \text{FALSE}]$, and then fill it in such that $A[i][j]$ indicates whether we can select a subset of $\{x_1, \ldots, x_i\}$ that sums to $j$. We use the recurrence $A[i][j] = A[i-1][j] \lor A[i-1][j - x_i]$. (If $j - x_i$ is negative, then we consider $A[i-1][j - x_i]$ to be FALSE.) The answer is $A[n][S/2]$; if $S$ is odd, the answer is immediately no.
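
Minimal Python sketches of parts (b) and (c) (function names are mine; part (c) collapses the table to one dimension, which does not change the $O(nS)$ running time, and assumes the $x_i$ are positive integers as stated):

```python
def subset_sum_to_partition(Y, t):
    """Part (b): map a Subset Sum instance (Y, t) to a Partition instance X."""
    T = sum(Y)
    if 2 * t == T:
        return list(Y)
    return list(Y) + [2 * T + t, 3 * T - t]   # the padding elements y_{m+1}, y_{m+2}

def partition(xs):
    """Part (c): True iff some subset of xs sums to half the total, in O(n * S) time."""
    S = sum(xs)
    if S % 2 == 1:                     # an odd total can never be split in half
        return False
    reachable = [True] + [False] * S   # reachable[j]: some subset seen so far sums to j
    for x in xs:
        for j in range(S, x - 1, -1):  # downwards, so each item is used at most once
            reachable[j] = reachable[j] or reachable[j - x]
    return reachable[S // 2]

print(partition([3, 1, 4, 2, 2]))      # True: {4, 2} sums to half of 12
print(partition([3, 1, 1]))            # False: the total, 5, is odd
```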

6. Suppose you are given an n × n grid on which there are m ≤ n points “in general
position” (meaning no two points are in the same row or column). Give an O(n log n)-
time algorithm that partitions the points into the minimum number of untangled,
descending chains (meaning that if you draw lines between consecutive points in the
same chain then no lines cross, and all the lines slope down to the right). You need
not prove your algorithm correct. The figure below shows a 7 × 7 grid with 7 points
partitioned into 3 untangled, descending chains.

Solution: We scan the grid from left to right and process the points in the order we
encounter them. For each point p, we add p to the chain whose right endpoint is as
low as possible while still being higher than p; if there is no such chain, then we make
p the start of a new chain.
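
A minimal Python sketch of this greedy (the coordinates in the example are made up, not those of the figure; the sketch only counts the chains, but recording which chain each point joins is a straightforward extension). Keeping the chains’ right-endpoint heights in a sorted list lets each point be placed by binary search, for $O(m \log m)$ time over the $m \le n$ points:

```python
import bisect

def min_descending_chains(points):
    """Greedy for question 6: points is a list of (column, row) coordinates
    in general position; returns the minimum number of untangled,
    descending chains."""
    ends = []                              # current right-endpoint heights, sorted ascending
    for _, y in sorted(points):            # process points from left to right
        i = bisect.bisect_right(ends, y)   # lowest chain endpoint still higher than y
        if i == len(ends):
            ends.append(y)                 # no chain can take this point: start a new one
        else:
            ends[i] = y                    # extend that chain down to this point
    return len(ends)

# Seven points on a 7 x 7 grid (made-up coordinates): the greedy uses 3 chains.
print(min_descending_chains([(1, 5), (2, 3), (3, 7), (4, 1), (5, 6), (6, 2), (7, 4)]))
```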

7. Show that determining if there is an (n/2)-clique in a graph on n vertices is NP-
complete. (Hint: show how to convert an instance (G, k) of Clique into an instance
of this new problem, by “pumping up” k relative to G if it starts off too small or
“pumping up” G while leaving k alone if k starts off too large.)

Solution: This new problem is in NP because, if someone gives us a list of half the
vertices in a graph, in polynomial time we can check that they form a clique. To show
it’s NP-hard we will show how, given an instance $(G, k)$ of Clique, in polynomial time we can produce a graph $G'$ that contains a clique on half its vertices if and only if $G$ contains a clique of size $k$.

Suppose $G$ is on $t$ vertices. If $2k = t$ then we just set $G' = G$. If $2k < t$ then we add $t - 2k$ vertices with edges to each other and to all the original vertices, so if $G$ has a clique of size $k$ then the new graph $G'$ has a clique of size $k + (t - 2k) = t - k$, which is half the number $t + (t - 2k) = 2t - 2k$ of vertices of $G'$, and vice versa. If $2k > t$ then we add $2k - t$ vertices with no edges incident to them, so $G$ has a clique of size $k$ if and only if the new graph $G'$ does, and $G'$ has $t + (2k - t) = 2k$ vertices.
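
A minimal Python sketch of the construction of $G'$ (the function name is mine), using adjacency matrices:

```python
def clique_to_half_clique(adj, k):
    """Build G' from G (a 0/1 adjacency matrix on t vertices) so that G' has
    a clique on half its vertices if and only if G has a k-clique."""
    t = len(adj)
    if 2 * k < t:
        extra, universal = t - 2 * k, True    # pump up k: new vertices joined to everything
    elif 2 * k > t:
        extra, universal = 2 * k - t, False   # pump up G: new isolated vertices
    else:
        extra, universal = 0, False           # k is already exactly half of t
    m = t + extra
    g = [[0] * m for _ in range(m)]
    for i in range(t):
        for j in range(t):
            g[i][j] = adj[i][j]
    if universal:
        for i in range(m):
            for j in range(t, m):
                if i != j:
                    g[i][j] = g[j][i] = 1
    return g
```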

8. In the three graphs below, fill in the minimum distance from vertex A to each vertex
for which that distance is defined. What algorithm should you use for each graph?

Solution: G1: breadth-first search; G2: Dijkstra’s algorithm; G3: Bellman-Ford.

[Figure: the three graphs with the minimum distances from A filled in; vertices for which the distance is undefined are marked ⊥.]
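
For reference, a minimal Python sketch of Bellman-Ford, the only one of the three algorithms that tolerates negative edge weights (breadth-first search suffices when all edges have the same weight, and Dijkstra’s algorithm when all weights are non-negative):

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths with possibly negative edge weights.
    n vertices (0..n-1), edges is a list of (u, v, w) for a directed edge
    u -> v of weight w.  Returns a distance list, with float('inf') for
    unreachable vertices; raises if a negative cycle is reachable."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):           # n - 1 rounds of relaxation suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:            # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from the source")
    return dist
```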