Announcements

• Homework 2 Due on Friday

Last Time

• Shortest Paths in Graph with Negative Edge
Weights

– Negative weight cycles

• might mean no shortest paths

– Bellman-Ford

• Computes shortest path length with ≤ k edges for
various k

• If no negative weight cycles, take k = |V|-1 to get
correct distances.

Algorithm

Bellman-Ford(G, s, ℓ)
    dist_0(v) ← ∞ for all v                              // can't reach v in 0 edges
    dist_0(s) ← 0
    For k = 1 to n
        For w ∈ V
            dist_k(w) ← min_{(v,w)∈E} ( dist_{k-1}(v) + ℓ(v,w) )
        dist_k(s) ← min(dist_k(s), 0)                    // s always has the trivial path

Result

Proposition: If n ≥ |V|-1 and if G has no
negative weight cycles, then for all v,

dist(v) = dist_n(v).

If there are no negative weight cycles, Bellman-Ford computes shortest paths in time O(|V||E|).
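For concreteness, here is a minimal Python sketch of this round-by-round computation (not from the slides); it assumes the graph is given as a dict mapping each vertex to a list of (neighbor, edge length) pairs, and names like bellman_ford and graph are only illustrative.

from math import inf

def bellman_ford(graph, s, rounds):
    """Round-by-round Bellman-Ford sketch.
    graph: dict mapping each vertex to a list of (neighbor, edge_length) pairs.
    Returns dist, where dist[k][v] is the shortest s-to-v length using at most k edges."""
    dist = [{v: inf for v in graph}]      # dist_0(v) = infinity: can't reach v in 0 edges
    dist[0][s] = 0
    for k in range(1, rounds + 1):
        prev = dist[-1]
        cur = {w: inf for w in graph}
        for v in graph:
            for w, length in graph[v]:
                # dist_k(w) = min over edges (v, w) of dist_{k-1}(v) + l(v, w)
                cur[w] = min(cur[w], prev[v] + length)
        cur[s] = min(cur[s], 0)           # s always has the trivial (empty) path
        dist.append(cur)
    return dist

With rounds = |V| - 1 and no negative weight cycles, dist[-1][v] is the true shortest-path distance; each round touches every edge once, which is where the O(|V||E|) bound comes from.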

Today

• Detecting Negative Weight Cycles.

• Shortest Paths in DAGs.

• Divide & Conquer

Detecting Negative Cycles

If there are no negative weight cycles, Bellman-
Ford computes shortest paths (and they might
not exist otherwise).

How do we know whether or not there are any?

Cycle Detection

Proposition: For any n ≥ |V| – 1, there are no
negative weight cycles reachable from s if and
only if for every v ∈ V

dist_n(v) = dist_{n+1}(v)

• Detect by running one more round of
Bellman-Ford.

• Need to see if any v’s distance changes.
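Building on the bellman_ford sketch above (so this snippet is not self-contained on its own), the check is just one extra round plus a comparison; has_negative_cycle is an illustrative name.

def has_negative_cycle(graph, s):
    """Run |V| rounds of Bellman-Ford plus one more, and report whether any
    distance still changes between round n and round n+1 (here n = |V| >= |V| - 1)."""
    n = len(graph)
    dist = bellman_ford(graph, s, n + 1)
    return any(dist[n][v] != dist[n + 1][v] for v in graph)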

Proof of “Only If”

• Suppose no negative weight cycles.

• For any n ≥ |V| – 1, dist_n(v) = dist(v).

• So

dist_n(v) = dist(v) = dist_{n+1}(v).

Proof of “If”

Suppose dist_n(v) = dist_{n+1}(v) for all v.

Since each round is computed only from the previous one, once one round repeats the previous round every later round does too, so dist_m(v) = dist_n(v) for all m ≥ n.

But if there were a negative weight cycle reachable from s, distances around it would keep decreasing as we ran more rounds. So there is no such cycle.

Alternative Proof

• Assume dist_n(v) = dist_{n+1}(v) = d(v) for all v.

• d(w) = min_{(v,w)∈E} ( d(v) + ℓ(v,w) ).

– d(w) ≤ d(v) + ℓ(v,w) for all (v,w) ∈ E

– ℓ(v,w) ≥ d(w) - d(v)

• Given a cycle v_1, v_2, v_3, …, v_m (back to v_1), the total length of the cycle is

    Σ_i ℓ(v_i, v_{i+1}) ≥ Σ_i ( d(v_{i+1}) - d(v_i) ) = 0,

since the sum telescopes around the cycle. So no cycle reachable from s has negative total weight.

Potential Function

• Let ℓ’(v,w) = ℓ(v,w) + d(v) - d(w) ≥ 0 (this is exactly the inequality above).

– Imagine somebody lending you d(w) when you arrive at w, but you have to pay it back when you leave.

• For any s-t path P = s, v_1, v_2, …, t, the sum telescopes:

    ℓ’(P) = ℓ(P) + d(s) - d(t).

• So every s-t path length shifts by the same constant: the shortest paths are the same, and the edge weights are now non-negative.
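A small Python sketch of this reweighting (in the style of Johnson's algorithm), assuming d maps every vertex to a finite shortest-path distance from s; reweight is an illustrative name.

def reweight(graph, d):
    """Replace each edge length l(v, w) by l'(v, w) = l(v, w) + d[v] - d[w].
    Since d[w] <= d[v] + l(v, w), every new length is >= 0, and every s-t path
    shifts by the same constant d[s] - d[t], so the shortest paths are unchanged."""
    return {v: [(w, length + d[v] - d[w]) for w, length in graph[v]]
            for v in graph}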

Shortest Paths in DAGs

We saw that shortest paths got harder when we had to deal with negative weight cycles. For general graphs we needed Bellman-Ford, which is much slower than our other algorithms.

One way to avoid this is to make the edge weights non-negative; in that case we can use Dijkstra.

Another way to get rid of negative weight cycles is to get rid of cycles altogether. If G is a DAG, there are better algorithms.

Fundamental Shortest Paths Formula

    dist(s) = 0
    dist(w) = min_{(v,w)∈E} ( dist(v) + ℓ(v,w) )   for w ≠ s

Hard to apply in general because there is no order in which to solve the equations.

A DAG gives a topological order!

Algorithm

ShortestPathsInDAGs(G, s, ℓ)
    TopologicalSort(G)                                       // O(|V|+|E|)
    For w ∈ V in topological order                           // O(|V|) total
        If w = s, dist(w) ← 0
        Else
            dist(w) ← min_{(v,w)∈E} ( dist(v) + ℓ(v,w) )     // O(|E|) total
            // dist(v) already computed for all upstream v

Runtime: O(|V|+|E|)
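A Python sketch of this algorithm (names are illustrative), assuming topo_order is a topological order of G's vertices and that unreachable vertices should end up at infinity.

from math import inf

def dag_shortest_paths(graph, s, topo_order):
    """graph: dict mapping each vertex to a list of (neighbor, edge_length) pairs.
    Vertices are processed in topological order, so every predecessor of w is
    already finished when dist(w) is computed."""
    incoming = {v: [] for v in graph}          # reverse the edges to find each w's predecessors
    for v in graph:
        for w, length in graph[v]:
            incoming[w].append((v, length))
    dist = {}
    for w in topo_order:
        if w == s:
            dist[w] = 0
        else:
            dist[w] = min((dist[v] + length for v, length in incoming[w]), default=inf)
    return dist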

Example

[Figure: a small DAG from s to t; the edge weights (including 0, 1, -1, 2, 3) are shown on the edges, and the dist values are filled in at each vertex in topological order.]

Shortest Path Algorithms Summary

• Unit weights: Breadth-First Search, O(|V|+|E|)

• Non-negative weights: Dijkstra, O(|V|log|V|+|E|)

• Arbitrary weights: Bellman-Ford, O(|V||E|)

• Arbitrary weights, graph is a DAG: Shortest-Paths-In-DAGs, O(|V|+|E|)

Divide & Conquer (Ch 2)

• General Technique

• Master Theorem

• Karatsuba Multiplication

• Strassen’s Algorithm

• Merge Sort

• Order Statistics

• Binary Search

• Closest Pair of Points

Divide and Conquer

This is the first of our three major algorithmic
techniques.

1. Break problem into pieces

2. Solve pieces recursively

3. Recombine pieces to get answer

Example: Integer Multiplication

Problem: Given two n-bit numbers, find their product.

Naïve Algorithm: Schoolboy multiplication. The
binary version of the technique that you
probably learned in elementary school.

Schoolboy Multiplication

        a_1    a_2    a_3    …  a_{n-1}  a_n
  ×     b_1    b_2    b_3    …  b_{n-1}  b_n
  ---------------------------------------------
        a_1b_1 a_2b_1 a_3b_1 …  a_{n-1}b_1 a_nb_1
      a_1b_2 a_2b_2 a_3b_2 a_4b_2 …  a_nb_2  0
        …      …      …      …      …      …
  + a_1b_n a_2b_n a_3b_n … a_nb_n  0  0 … 0
  ---------------------------------------------
                    ANSWER

Each row below the line is one digit of b times all of a, each successive row shifted one more place (one more trailing 0); the ANSWER is the sum of these n shifted rows.

Question: Runtime

What is the asymptotic runtime of the
schoolboy algorithm?

A) O(n)

B) O(n log(n))

C) O(n²)

D) O(n³)

E) O(2ⁿ)

Answer: C. We need to write down O(n²) bits of partial products to add, and addition can be done in linear time, so the total is O(n²).
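As a sketch (not from the slides), binary schoolboy multiplication adds one shifted copy of n for every set bit of m, so n-bit inputs mean up to n additions of n-bit numbers, i.e. O(n²) bit operations; schoolboy_product is an illustrative name.

def schoolboy_product(n: int, m: int) -> int:
    """Grade-school multiplication in binary: add a shifted copy of n for every 1 bit of m."""
    result = 0
    shift = 0
    while m:
        if m & 1:                  # this bit of m is 1: add n, shifted into place
            result += n << shift
        m >>= 1
        shift += 1
    return result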

Two Digit Multiplication

          a    b
  ×       c    d
  ----------------
         ad   bd
  +  ac  bc    0
  ----------------
     ac  ad+bc  bd

Two-Digit Multiplication

(ab)·(cd) = [ac][bc+ad][bd]

Requires 4 one-digit multiplications and one
addition.

Trick: Compute ac, bd, (a+b)(c+d).
Note that bc+ad = (a+b)(c+d) – ac – bd.

Requires 3 one-digit multiplications and 4
addition/subtractions.
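As a worked base-10 example (not from the slides): to compute 47 · 81, take a = 4, b = 7, c = 8, d = 1. Then ac = 32, bd = 7, and (a+b)(c+d) = 11 · 9 = 99, so bc + ad = 99 - 32 - 7 = 60, and the product is 32·100 + 60·10 + 7 = 3807, which is indeed 47 · 81.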

How can we apply this to larger problems?

Larger Base

        a_1 a_2 a_3 … a_{n/2} | a_{n/2+1} … a_n    =   A | B
  ×     b_1 b_2 b_3 … b_{n/2} | b_{n/2+1} … b_n    =   C | D
  ------------------------------------------------------------
                 AC    AD+BC    BD

Formally

Want to multiply N and M:

1. Let X ≈ √(N+M) be a power of 2.

2. Write N = AX+B, M = CX+D

– This can be done by just taking the high and low
bits.

3. N·M = AC·X² + (AD+BC)·X + BD
       = AC·X² + [(A+B)(C+D) - AC - BD]·X + BD

– The multiplications by X are just bit shifts.
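A tiny Python sketch of step 2, assuming X = 2^k: taking the high and low bits is just a shift and a mask (split_at is a hypothetical helper).

def split_at(n: int, k: int):
    """Write n = a * 2**k + b by splitting n into its high bits and its low k bits."""
    a = n >> k                     # high bits
    b = n & ((1 << k) - 1)         # low k bits
    return a, b

For example, split_at(0b10110111, 4) returns (0b1011, 0b0111).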

Improved Multiplication

ImprovedMult(N, M)
    Let X = 2^⌊log(N+M)/2⌋                        // O(n)
    Write N = AX + B, M = CX + D
    P_1 ← Product(A, C)
    P_2 ← Product(B, D)                           // three schoolboy products: O(n²)
    P_3 ← Product(A+B, C+D)
    Return P_1·X² + [P_3 - P_1 - P_2]·X + P_2     // additions and shifts: O(n)

Runtime: O(n²). No asymptotic improvement!
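A Python sketch of one level of this algorithm, with Python's built-in * standing in for the schoolboy Product on the sub-problems; splitting at half of the maximum bit length is an implementation choice that plays the role of X = 2^⌊log(N+M)/2⌋.

def improved_mult(n: int, m: int) -> int:
    """One level of the three-multiplication trick: N*M from P1, P2, P3."""
    half = max(n.bit_length(), m.bit_length()) // 2
    if half == 0:                              # single-bit inputs: nothing to split
        return n * m
    x = 1 << half                              # X is a power of 2, so multiplying by X is a shift
    a, b = n >> half, n & (x - 1)              # N = A*X + B
    c, d = m >> half, m & (x - 1)              # M = C*X + D
    p1 = a * c                                 # Product(A, C)
    p2 = b * d                                 # Product(B, D)
    p3 = (a + b) * (c + d)                     # Product(A+B, C+D)
    return (p1 << (2 * half)) + ((p3 - p1 - p2) << half) + p2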

More Detailed Analysis

This algorithm gives no asymptotic improvement, but it is still better.

To analyze this, let's suppose that computing the product of two n-bit numbers with the schoolboy algorithm takes exactly n² time.

Improved Multiplication

ImprovedMult(N, M)
    Let X = 2^⌊log(N+M)/2⌋                        // O(n)
    Write N = AX + B, M = CX + D
    P_1 ← Product(A, C)
    P_2 ← Product(B, D)                           // three products of n/2-bit numbers: 3(n/2)²
    P_3 ← Product(A+B, C+D)
    Return P_1·X² + [P_3 - P_1 - P_2]·X + P_2     // additions and shifts: O(n)

Runtime: (3/4)n² + O(n). Better than n²!

Further Improvements

So this trick does help. Saving a multiplication at the cost of a few extra additions is a big deal when multiplications cost O(n²) and additions cost only O(n).

Can we do better?

Yes. Our algorithm still uses schoolboy multiplication for the smaller sub-products. We can instead apply our faster algorithm recursively.

Further Improvement

KaratsubaMult(N, M)
    If N + M < 99, Return Product(N, M)           // base case: schoolboy multiplication
    Let X = 2^⌊log(N+M)/2⌋                        // O(n)
    Write N = AX + B, M = CX + D                  // O(n)
    P_1 ← KaratsubaMult(A, C)
    P_2 ← KaratsubaMult(B, D)                     // three recursive calls: ???
    P_3 ← KaratsubaMult(A+B, C+D)
    Return P_1·X² + [P_3 - P_1 - P_2]·X + P_2
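A recursive Python sketch of KaratsubaMult, keeping the slide's small-input cutoff; karatsuba_mult is an illustrative name and the built-in * is used only in the base case.

def karatsuba_mult(n: int, m: int) -> int:
    """Recursive Karatsuba multiplication: three half-size recursive calls per level."""
    if n + m < 99:                             # base case from the slide: multiply directly
        return n * m
    half = max(n.bit_length(), m.bit_length()) // 2
    x = 1 << half                              # multiplying by X = 2^half is just a shift
    a, b = n >> half, n & (x - 1)              # N = A*X + B
    c, d = m >> half, m & (x - 1)              # M = C*X + D
    p1 = karatsuba_mult(a, c)
    p2 = karatsuba_mult(b, d)
    p3 = karatsuba_mult(a + b, c + d)
    return (p1 << (2 * half)) + ((p3 - p1 - p2) << half) + p2

The "???" above is the cost of the three half-size recursive calls; working out the resulting recurrence T(n) = 3T(n/2) + O(n) is where the Master Theorem comes in.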