Announcements
Homework 2 Due on Friday
Last Time
Shortest Paths in Graph with Negative Edge Weights
Negative weight cycles
might mean no shortest paths
Bellman-Ford
Computes shortest path length with ≤ k edges for various k
If no negative weight cycles, take k = |V|-1 to get correct distances.
Algorithm
Bellman-Ford(G,s,ℓ)
  dist_0(v) ← ∞ for all v        // can't reach yet
  dist_0(s) ← 0
  For k = 1 to n
    For w ∈ V
      dist_k(w) ← min_{(v,w)∈E} (dist_{k-1}(v) + ℓ(v,w))
    dist_k(s) ← min(dist_k(s), 0)   // s always has the trivial 0-edge path
Result
Proposition: If n ≥ |V|-1 and if G has no negative weight cycles, then for all v,
dist(v) = dist_n(v).
Bellman-Ford computes shortest paths if no negative weight cycles in time O(|V||E|).
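For reference, a minimal Python sketch of this recurrence (an illustration, not the course's code); the graph is assumed to be given as a vertex list and a list of directed edges (v, w, ℓ(v,w)):

def bellman_ford(vertices, edges, s, rounds=None):
    INF = float("inf")
    dist = {v: INF for v in vertices}      # dist_0(v) = infinity: can't reach in 0 edges
    dist[s] = 0                            # dist_0(s) = 0
    n = rounds if rounds is not None else len(vertices) - 1
    for _ in range(n):                     # k = 1, ..., n
        new = {w: INF for w in vertices}
        for v, w, length in edges:         # dist_k(w) = min over edges (v,w) into w
            new[w] = min(new[w], dist[v] + length)
        new[s] = min(new[s], 0)            # s always keeps its trivial 0-edge path
        dist = new
    return dist

With rounds ≥ |V| – 1 and no negative weight cycles, this returns dist(v) for every v, in O(|V||E|) time.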
Today
Detecting Negative Weight Cycles.
Shortest Paths in DAGs.
Divide & Conquer
Detecting Negative Cycles
If there are no negative weight cycles, Bellman-Ford computes shortest paths (and they might not exist otherwise).
How do we know whether or not there are any?
Cycle Detection
Proposition: For any n ≥ |V| – 1, there are no negative weight cycles reachable from s if and only if for every v ∈ V
dist_n(v) = dist_{n+1}(v)
Detect by running one more round of Bellman-Ford.
Need to see if any v’s distance changes.
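A sketch of this test in Python, reusing the hypothetical bellman_ford sketch from above (running it twice is wasteful but keeps the check explicit):

def has_negative_cycle_from(vertices, edges, s):
    n = len(vertices)                          # any n >= |V| - 1 works
    d_n = bellman_ford(vertices, edges, s, rounds=n)
    d_next = bellman_ford(vertices, edges, s, rounds=n + 1)
    # A negative cycle reachable from s exists iff some distance still improves.
    return any(d_next[v] < d_n[v] for v in vertices)

In practice you would just run one extra relaxation round on the already-computed table; the point is the comparison of dist_n against dist_{n+1}.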
Proof of “Only If”
Suppose no negative weight cycles.
For any n ≥ |V| – 1, dist_n(v) = dist(v).
So
dist_n(v) = dist(v) = dist_{n+1}(v)
Proof of “If”
Suppose dist_n(v) = dist_{n+1}(v) for all v.
So the recurrence has reached a fixed point: dist_k(v) = dist_n(v) for every k ≥ n.
But if there were a negative weight cycle reachable from s, going around it again and again would keep producing shorter paths, so dist_k(v) would eventually decrease for some v on the cycle. Contradiction.
Alternative Proof
Assume dist_n(v) = dist_{n+1}(v) = d(v) for all v.
d(w) = min_{(v,w)∈E} (d(v) + ℓ(v,w)).
d(w) ≤ d(v) + ℓ(v,w) for all (v,w) ∈ E
ℓ(v,w) ≥ d(w) – d(v)
Given a cycle v_1, v_2, v_3, …, v_m of reachable vertices, the total length of the cycle is
ℓ(v_1,v_2) + ℓ(v_2,v_3) + … + ℓ(v_m,v_1) ≥ (d(v_2) – d(v_1)) + (d(v_3) – d(v_2)) + … + (d(v_1) – d(v_m)) = 0,
so no reachable cycle has negative total weight.
Potential Function
Let ℓ’(v,w) = ℓ(v,w) + d(v) – d(w) ≥ 0
Imagine somebody lending you d(w) when you arrive at w, but having to pay it back when you leave.
For any s-t path P = s, v_1, v_2, …, t, the lengths telescope: ℓ’(P) = ℓ(P) + d(s) – d(t).
Shortest paths same. Non-negative edges.
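A small sketch of this reweighting, assuming dist is a converged (fixed-point) Bellman-Ford table d(·):

def reweight(edges, dist):
    # l'(v,w) = l(v,w) + d(v) - d(w); non-negative once dist is a fixed point.
    new_edges = []
    for v, w, length in edges:
        if dist[v] == float("inf"):
            continue                       # edge out of an unreachable vertex: skip
        new_length = length + dist[v] - dist[w]
        assert new_length >= 0             # d(w) <= d(v) + l(v,w)
        new_edges.append((v, w, new_length))
    return new_edges

Every s-t path changes by the same amount d(s) – d(t), so which paths are shortest is unchanged, and all remaining edge lengths are non-negative.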
Shortest Paths in DAGs
We saw that shortest paths got harder once we had to deal with negative weight cycles. For general graphs we needed Bellman-Ford, which is much slower than our other algorithms.
One way to avoid this was to make all edge weights non-negative; in that case we could use Dijkstra.
Another way to get rid of negative weight cycles is to get rid of cycles altogether. If G is a DAG, there are better algorithms.
Fundamental Shortest Paths Formula
dist(w) = min_{(v,w)∈E} (dist(v) + ℓ(v,w)),  with dist(s) = 0.
Hard to apply in general because there's no order to solve the equations in.
DAG gives topological order!
Algorithm
ShortestPathsInDAGs(G,s,ℓ)
  TopologicalSort(G)                                // O(|V|+|E|)
  For w ∈ V in topological order                    // O(|V|) iterations
    If w = s, dist(w) ← 0
    Else
      dist(w) ← min_{(v,w)∈E} (dist(v) + ℓ(v,w))    // O(|E|) work total
      // dist(v) for all upstream v already computed
Runtime
O(|V|+|E|)
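A minimal Python sketch of the same loop, assuming the topological order and incoming-edge lists are already computed:

def shortest_paths_in_dag(order, incoming, s):
    # order: vertices in topological order; incoming[w]: list of (v, l(v,w)).
    INF = float("inf")
    dist = {w: INF for w in order}
    for w in order:
        if w == s:
            dist[w] = 0
        else:
            for v, length in incoming[w]:
                # dist[v] is already final, since v comes before w in the order
                dist[w] = min(dist[w], dist[v] + length)
    return dist

Each edge is examined exactly once, so this adds only O(|V|+|E|) on top of the topological sort.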
Example
[Figure: a small DAG from s to t with edge weights –1, 0, and 1; the dist values are filled in vertex by vertex in topological order.]
Shortest Path Algorithms Summary
Unit Weights: Breadth First Search O(|V|+|E|)
Non-negative Weights: Dijkstra O(|V|log|V|+|E|)
Arbitrary Weights: Bellman-Ford O(|V||E|)
Arbitrary Weights, graph is a DAG:
Shortest-Paths-In-DAGs O(|V|+|E|)
Divide & Conquer (Ch 2)
General Technique
Master Theorem
Karatsuba Multiplication
Strassen’s Algorithm
Merge Sort
Order Statistics
Binary Search
Closest Pair of Points
Divide and Conquer
This is the first of our three major algorithmic techniques.
Break problem into pieces
Solve pieces recursively
Recombine pieces to get answer
Example: Integer Multiplication
Problem: Given two n-bit numbers find their product.
Naïve Algorithm: Schoolboy multiplication. The binary version of the technique that you probably learned in elementary school.
Schoolboy Multiplication
      a_1  a_2  a_3  …  a_{n-1}  a_n
  ×   b_1  b_2  b_3  …  b_{n-1}  b_n
  ----------------------------------
      a_1b_1  a_2b_1  a_3b_1  …  a_{n-1}b_1  a_nb_1
      a_1b_2  a_2b_2  a_3b_2  a_4b_2  …  a_nb_2  0
      …
  +   a_1b_n  a_2b_n  a_3b_n  …  a_nb_n  0  0  …  0  0
  ----------------------------------------------------
      ANSWER
Question: Runtime
What is the asymptotic runtime of the schoolboy algorithm?
O(n)
O(n log(n))
O(n²)
O(n³)
O(2ⁿ)
Need to write down O(n²) bits of numbers to add. Addition can be done in linear time.
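For concreteness, a sketch of the schoolboy idea on Python integers using only shifts and additions (an illustration, not the lecture's code): roughly one O(n)-bit addition per bit of the multiplier, hence O(n²).

def schoolboy_mult(N, M):
    result = 0
    shift = 0
    while M:
        if M & 1:                     # this bit of M contributes N, shifted
            result += N << shift      # one addition of O(n)-bit numbers
        M >>= 1
        shift += 1
    return result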
Two Digit Multiplication
a b
x c d
———-
ad bd
+ ac bc 0
—————–
ac ad+bc bd
Two-Digit Multiplication
(ab) ·(cd) = [ac][bc+ad][bd]
Requires 4 one-digit multiplications and one addition.
Trick: Compute ac, bd, (a+b)(c+d).
Note that bc+ad = (a+b)(c+d) – ac – bd.
Requires 3 one-digit multiplications and 4 additions/subtractions.
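A quick decimal sanity check (not from the slides): for 23 × 41 we have a = 2, b = 3, c = 4, d = 1, so ac = 8, bd = 3, and (a+b)(c+d) = 5·5 = 25. Then bc + ad = 25 – 8 – 3 = 14, and 23·41 = 8·100 + 14·10 + 3 = 943, as expected.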
How can we apply this to larger problems?
Larger Base
Split each number into its high half and low half:
      a_1 a_2 a_3 … a_{n/2} | a_{n/2+1} … a_n        A = high bits, B = low bits
  ×   b_1 b_2 b_3 … b_{n/2} | b_{n/2+1} … b_n        C = high bits, D = low bits
  ----------------------------------------------
      AC    AD+BC    BD        (digits in base X = 2^{n/2})
Formally
Want to multiply N and M:
Let X ≈ √(N+M) be a power of 2.
Write N = AX+B, M = CX+D
This can be done by just taking the high and low bits.
N·M = AC·X² + (AD+BC)·X + BD
    = AC·X² + [(A+B)(C+D) – AC – BD]·X + BD
The multiplications by X are just bit shifts.
Improved Multiplication
ImprovedMult(N,M)
  Let X = 2^⌊log(N+M)/2⌋                      // O(n)
  Write N = AX + B, M = CX + D                // O(n): take the high and low bits
  P_1 ← Product(A,C)
  P_2 ← Product(B,D)                          // three schoolboy products: O(n²)
  P_3 ← Product(A+B,C+D)
  Return P_1·X² + [P_3 – P_1 – P_2]·X + P_2   // O(n): shifts and additions
Runtime: O(n²).
No asymptotic improvement!
More Detailed Analysis
This algorithm gives no asymptotic improvement, but it is still better.
To analyze this, let's suppose that computing the product of two n-bit numbers using the schoolboy algorithm takes n² time.
Improved Multiplication
ImprovedMult(N,M)
  Let X = 2^⌊log(N+M)/2⌋                      // O(n)
  Write N = AX + B, M = CX + D                // O(n)
  P_1 ← Product(A,C)
  P_2 ← Product(B,D)                          // three (n/2)-bit schoolboy products: 3(n/2)²
  P_3 ← Product(A+B,C+D)
  Return P_1·X² + [P_3 – P_1 – P_2]·X + P_2   // O(n)
Runtime: (3/4)n² + O(n).
Better than n²!
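A Python sketch of one level of this split (an illustration: the builtin * stands in for the schoolboy sub-products, and bit_length approximates ⌊log(N+M)⌋):

def improved_mult(N, M):
    half = max((N + M).bit_length() // 2, 1)
    X = 1 << half                        # X = 2^half, roughly sqrt(N + M)
    A, B = N >> half, N & (X - 1)        # N = A*X + B  (high / low bits)
    C, D = M >> half, M & (X - 1)        # M = C*X + D
    P1 = A * C                           # Product(A, C)
    P2 = B * D                           # Product(B, D)
    P3 = (A + B) * (C + D)               # Product(A+B, C+D)
    # N*M = AC*X^2 + (AD+BC)*X + BD, and AD+BC = P3 - P1 - P2
    return (P1 << (2 * half)) + ((P3 - P1 - P2) << half) + P2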
Further Improvements
So this trick does help. Saving a multiplication at the cost of a few extra additions is a big deal when multiplications cost O(n²) and additions cost O(n).
Can we do better?
Yes. Our algorithm is still using schoolboy multiplication to do the smaller multiplications. We can instead use our faster algorithm.
Further Improvement
KaratsubaMult(N,M)
  If N+M < 99, Return Product(N,M)            // base case: small inputs
  Let X = 2^⌊log(N+M)/2⌋                      // O(n)
  Write N = AX + B, M = CX + D                // O(n)
  P_1 ← KaratsubaMult(A,C)
  P_2 ← KaratsubaMult(B,D)                    // three recursive calls: ???
  P_3 ← KaratsubaMult(A+B,C+D)
  Return P_1·X² + [P_3 – P_1 – P_2]·X + P_2   // O(n)
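A sketch of the fully recursive version, with the same caveats; the threshold 99 just mirrors the slide's base case:

def karatsuba_mult(N, M):
    if N + M < 99:                       # small inputs: multiply directly
        return N * M                     # stands in for Product(N, M)
    half = (N + M).bit_length() // 2
    X = 1 << half                        # X = 2^half
    A, B = N >> half, N & (X - 1)        # N = A*X + B
    C, D = M >> half, M & (X - 1)        # M = C*X + D
    P1 = karatsuba_mult(A, C)            # three recursive sub-products
    P2 = karatsuba_mult(B, D)
    P3 = karatsuba_mult(A + B, C + D)
    return (P1 << (2 * half)) + ((P3 - P1 - P2) << half) + P2

The cost of the three recursive calls is the "???" above; pinning it down is what the Master Theorem (coming up) is for.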