
Graph Algorithms

● Graph Traversal (BFS/DFS)
● Dijkstra’s Algorithm
● Prim’s Algorithm
● Kruskal’s Algorithm

Graph Algorithms – Steps for Problem Solving

Easiest way to approach these problems is to figure out how to modify the graph
so that you can use an existing graph algorithm to solve your problem

● You don’t need to fully re-prove everything but there does need to be some
form of justification

Graph Algorithms 1

There are two types of professional wrestlers: “babyfaces” (“good guys”) and
“heels” (“bad guys”). Between any pair of professional wrestlers, there may or may
not be a rivalry. Suppose we have n professional wrestlers and we have a list of r
pairs of wrestlers for which there are rivalries. Give an O(n + r) time algorithm that
determines whether it is possible to designate some of the wrestlers as babyfaces
and the remainder as heels such that each rivalry is between a babyface and a
heel. If it is possible to perform such a designation, your algorithm should produce
it.

Graph Algorithms 1 – Solution

1. This is essentially a graph two-coloring problem
2. Build a graph G whose nodes are the wrestlers, with an edge between two

wrestlers whenever there is a rivalry
3. To two-color the graph, run a breadth-first search (from every unvisited node,

since the graph may be disconnected), recording each node’s BFS depth d. Give all
nodes with odd d one color, say “heel”, and all nodes with even d the other color,
“babyface”
4. Finally, check every rivalry edge: if both endpoints received the same color, no

valid designation exists; otherwise output the coloring. Since the BFS takes time
O(n + r) and the checking takes time O(r), the total runtime is O(n + r)
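A minimal sketch of this solution in Python (the input representation — wrestlers numbered 0 to n−1 and rivalries as a list of pairs — is our assumption, not part of the problem):

```python
from collections import deque

def designate(n, rivalries):
    """Two-color wrestlers 0..n-1 so every rivalry joins a babyface and a heel.

    Returns a list of labels, or None if no valid designation exists.
    BFS is O(n + r) and the final check is O(r), so the total is O(n + r).
    """
    adj = [[] for _ in range(n)]
    for u, v in rivalries:
        adj[u].append(v)
        adj[v].append(u)

    d = [None] * n              # BFS depth; its parity determines the color
    for s in range(n):          # the rivalry graph may be disconnected
        if d[s] is not None:
            continue
        d[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if d[v] is None:
                    d[v] = d[u] + 1
                    q.append(v)

    color = ["babyface" if d[u] % 2 == 0 else "heel" for u in range(n)]
    for u, v in rivalries:      # fails exactly when some cycle is odd
        if color[u] == color[v]:
            return None
    return color
```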

Graph Algorithms 2

You are given a weighted directed graph G = (V,E,w) and the shortest path
distances d(s, u) from a source vertex s to every other vertex in G. However, you
are not given the predecessor pointers p(u) for these shortest paths. With this
information, give an algorithm to find a shortest path from s to a given vertex t in
O(|V| + |E|) time.

Graph Algorithms 2 – Breaking down the problem

Basically, we have the shortest distance to every vertex and now we want to use
that information to get the shortest path to every vertex.

Like with most applications of graph algorithms, there are multiple ways to do this.

Graph Algorithms 2 – Solution 1

1. Starting from node t, go through the incoming edges of t one at a time to
check whether the condition d(s,t) = w(u,t) + d(s,u) is satisfied for some (u,t) in E.

2. There must be at least one node u satisfying that condition, and such a node
u must be a predecessor of node t on some shortest path.

3. Repeat this recursively (apply the first step with node u as the destination
instead of node t to find the predecessor of node u) until node s is
reached.

Since each directed edge is checked at most once, the complexity is O(|V| + |E|)
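A sketch of Solution 1, assuming the graph comes as an edge list with a weight dictionary and the distances in a dict d (these representation details are assumptions, not part of the problem):

```python
def shortest_path(n, edges, w, d, s, t):
    """Recover one shortest s-to-t path from the distances d alone.

    Walks backwards from t along "tight" edges, i.e. edges (u, v) with
    d[v] = d[u] + w[(u, v)]; at least one such incoming edge must exist
    at every vertex on a shortest path.  Each edge is examined at most
    once over the whole walk, so the runtime is O(|V| + |E|).
    """
    incoming = [[] for _ in range(n)]
    for u, v in edges:
        incoming[v].append(u)

    path = [t]
    cur = t
    while cur != s:
        for u in incoming[cur]:
            if d[cur] == d[u] + w[(u, cur)]:
                cur = u
                break
        path.append(cur)
    path.reverse()
    return path
```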

Graph Algorithms 2 – Solution 2

Run a modified version of Dijkstra’s algorithm without using the priority queue.

Set S = {s}
While t is not in S:

    For each node u not in S with an edge (v, u) in E where v is in S, check
    whether the condition d(s,u) = w(v,u) + d(s,v) is true

    If true, add u to S, and record v as the predecessor of u.

Then, we can work our way back from t through the predecessor pointers to find
the shortest path from s to t. Since each directed edge is checked at most once,
the run time is O(|V| + |E|)
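A sketch of Solution 2 under the same assumed representation (adjacency lists of outgoing neighbors, a weight dictionary, and a distance dict); the frontier queue keeps each edge from being examined more than once:

```python
from collections import deque

def predecessors(n, adj, w, d, s):
    """Build predecessor pointers from the distances d, starting at s.

    Grows the set S of settled vertices outward from s, admitting a vertex u
    only through a "tight" edge (v, u) with d[u] = d[v] + w[(v, u)].  Each
    directed edge is examined at most once, so the runtime is O(|V| + |E|).
    """
    pred = [None] * n
    in_S = [False] * n
    in_S[s] = True
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if not in_S[u] and d[u] == d[v] + w[(v, u)]:
                in_S[u] = True
                pred[u] = v
                q.append(u)
    return pred
```

Walking back from t through the returned pointers then yields the shortest path.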

Graph Algorithms 3

Suppose that a graph G has a minimum spanning tree already computed. How
quickly can we update the minimum spanning tree if we add a new vertex and
incident edges to G?

Graph Algorithms 3 – Solution

● Let v_new be the new vertex added to G, and let e_min be the
minimum-weight edge connecting v_new to the MST

● Let the new MST containing v_new be MST’

● Clearly e_min will be a part of MST’

● But we need to check, for the other new edges from v_new,
whether adding them to MST’ and removing original edges
from MST gives a lower-weight spanning tree

Graph Algorithms 3 – Solution Continued

● There are a total of |V| – 1 edges in the old MST
● The new MST will contain a subset of the old MST edges and the newly added edges
● At most |V| new edges can be added
● Total candidate edges = 2|V| – 1 = O(V)
● Therefore, we can run Prim’s or Kruskal’s on just these O(V) candidate edges
● This takes O(V log V) time
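A sketch of this approach, assuming edges are given as (weight, u, v) tuples; the union-find helper is our own addition:

```python
def update_mst(mst_edges, new_edges):
    """Recompute the MST after adding one vertex and its incident edges.

    Runs Kruskal's algorithm on only the O(V) candidate edges: the |V| - 1
    old MST edges plus the (at most |V|) new edges.  Sorting 2|V| - 1 edges
    dominates, giving O(V log V).  Edges are (weight, u, v) tuples.
    """
    edges = sorted(mst_edges + new_edges)

    parent = {}                         # union-find over vertex labels
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for wgt, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((wgt, u, v))
    return mst
```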

Greedy Algorithms

● Two general kinds of proofs:
○ Greedy stays ahead: Inductively prove that under some measure, the partial solutions

produced by the greedy algorithm “stay ahead” of the partial solutions produced by any other
algorithm

○ Exchange argument: Convert a solution produced by an optimal algorithm into the solution
produced by your greedy algorithm in a way that doesn’t worsen the solution’s quality

Greedy Algorithms – Steps for Problem Solving

1. Give Greedy Algorithm
2. Prove Correctness
3. Runtime

Greedy Example 1

A pharmacist has W pills and n empty bottles. Let {p_1, p_2, …, p_n} denote the
number of pills that each bottle can hold.

Describe a greedy algorithm which, given W and {p_1, p_2, …, p_n}, determines the
fewest number of bottles needed to store the pills. Prove that your algorithm is
correct.

Greedy Example 1 – The Algorithm

● Sort the pill bottles by capacity p_i in descending order.
● Greedily take the bottle that can hold the most pills and fill it.
● Repeat until there are no more pills.
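The three steps above can be sketched as:

```python
def fewest_bottles(W, capacities):
    """Fill the largest bottles first.

    Returns the fewest bottles needed to store W pills, or None if the
    total capacity is insufficient.  Sorting dominates: Theta(n log n).
    """
    bottles = 0
    for cap in sorted(capacities, reverse=True):
        if W <= 0:
            break
        W -= cap
        bottles += 1
    return bottles if W <= 0 else None
```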

We aren’t done yet! We still need to prove correctness.

Greedy Example 1 – Proof of Correctness

BC: OPT agrees with us on the first 0 bottles to fill.

IH: Assume there is an optimal solution OPT that agrees with us on the first k
bottles to fill.

IS: If |OPT| > k and OPT doesn’t include our (k+1)st choice of bottle:

Call OPT’s (k+1)st choice i and our (k+1)st choice j. Since we chose the largest
bottle, we know p_j ≥ p_i. Therefore, we can safely dump the contents of bottle i into
bottle j to make OPT’, which agrees with us on the first k+1 bottles to fill.

Greedy Example 1 – Runtime

All we have to do is sort the n bottles, which is Θ(n log n).

Greedy Example 2

You are given two lists of positive integers, A and B, each of size n. You may
rearrange the elements of each list however you like. Afterward, we will compute
Π_i A_i^(B_i): the product of A_i raised to the power B_i over all i. Our goal is to have
this product be as large as possible.

Greedy Example 2 – The Algorithm

Sort A and B in the same order (both ascending or both descending, either works).

For the sake of our proof of correctness, let’s say we chose to sort them both in
ascending order.
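A minimal sketch of this choice (using Python’s math.prod, available since Python 3.8):

```python
import math

def max_power_product(A, B):
    """Sort both lists in the same order and pair them element-wise.

    Returns the maximum possible value of prod(A_i ** B_i) over all
    rearrangements: the largest base gets the largest exponent.
    """
    return math.prod(a ** b for a, b in zip(sorted(A), sorted(B)))
```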

Greedy Example 2 – Proof of Correctness
BC: OPT has at most n choose 2 inversions from our solution.

IH: Assume there exists an OPT with k inversions from our solution.

IS: If k > 0, there exists an inversion in OPT. Our solution must have b_i before b_j,
whereas OPT has b_j before b_i, for some i < j. Let C = Π_i a_i^(b_i) in OPT’s
ordering. Form OPT’ by switching the order of b_i and b_j.

C’ = C × (a_i^(b_i) × a_j^(b_j)) / (a_i^(b_j) × a_j^(b_i))
   = C × (a_i^(b_i−b_j) × a_j^(b_j−b_i))
   = C × (a_j^(b_j−b_i) / a_i^(b_j−b_i))
   = C × (a_j / a_i)^(b_j−b_i)

Since a_j ≥ a_i and b_j ≥ b_i, we know that (a_j / a_i)^(b_j−b_i) ≥ 1. Therefore,
C’ ≥ C and OPT’ is an optimal solution with k−1 inversions from our solution.

Divide and Conquer

● Divide
○ Usually constant time
● Conquer
○ How many sub-problems?
○ Size of each sub-problem?
● Combine
○ What is the time complexity for “merging” the sub-problems?
● Runtime Analysis
○ Master Theorem
○ Recurrence Relation

Divide And Conquer 1

Given a sequence of the numbers from 1 to n, count the number of inversions
relative to 1, 2, …, n. (E.g., if n = 2 and the sequence is 2, 1, then there is 1
inversion relative to 1, 2, the original sequence.)

Divide And Conquer 1 - Solution:

● Divide:
○ Divide the array into two pieces of the same length
○ Base case: if the length is 1, return 0
● Conquer:
○ Recursively solve each sub-problem
○ Solve(l, (l+r)/2), Solve((l+r)/2 + 1, r)
● Merge:
○ Add those two sub-counts together
○ Need to also count inversions between the two sub-arrays
■ Naive way: O(n²)
■ Better: two pointers during the merge make it O(n)

Divide And Conquer 1 - Solution

● Pseudo-code:

Divide And Conquer 2

Consider a divide-and-conquer algorithm that splits the problem of size n into 4
sub-problems of size n/2. Assume that the dividing step takes O(n²) to run and the
combining step takes O(n² log n) to run on a problem of size n. Use any method
that you know of to come up with an upper bound (as tight as possible) on the cost
of this algorithm.

Divide And Conquer 2 - Solution

Use the generalized case 2 of the Master Theorem: for
T(n) = aT(n/b) + Θ(n^(log_b a) · log^k n) with k > 0, we have
T(n) = Θ(n^(log_b a) · log^(k+1) n).

The divide and combine steps together take O(n² log n) time, and in the worst case
they take Θ(n² log n) time. Hence the recurrence for the given algorithm is
T(n) = 4T(n/2) + Θ(n² log n) in the worst case. Comparing with the generalized
case, a = 4, b = 2, k = 1, and n^(log_2 4) = n², so T(n) = Θ(n² log² n). Since this
expression for T(n) is the worst-case running time, an upper bound on the running
time is O(n² log² n).
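The merge-based inversion count described in Divide And Conquer 1 can be sketched as:

```python
def count_inversions(seq):
    """Count pairs i < j with seq[i] > seq[j] by divide and conquer.

    Merge sort: each half is counted recursively, and cross inversions are
    counted with two pointers during the O(n) merge, for O(n log n) total.
    """
    def sort_count(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, count_left = sort_count(a[:mid])
        right, count_right = sort_count(a[mid:])
        merged, cross = [], 0
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                cross += len(left) - i   # left[i:] all exceed right[j]
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, count_left + count_right + cross
    return sort_count(list(seq))[1]
```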