10/12/2010
Analysis of Algorithms
LECTURE 24: Shortest Paths III

• All-pairs shortest paths
• Matrix-multiplication algorithm
• Floyd-Warshall algorithm
• Johnson's algorithm

Single-source shortest paths
• Nonnegative edge weights: Dijkstra's algorithm, O(E + V lg V).
• General edge weights: Bellman-Ford algorithm, O(VE).
• DAG: one pass of Bellman-Ford, O(V + E).
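These single-source routines are assumed background for the rest of the lecture. As a concrete reference point, here is a minimal Bellman-Ford sketch in Python; the edge-list representation, function name, and `INF` sentinel are illustrative choices, not from the slides:

```python
# Single-source shortest paths with general (possibly negative) edge
# weights: Bellman-Ford, O(VE).  A sketch, not the lecture's pseudocode.
INF = float("inf")

def bellman_ford(n, edges, s):
    """n vertices 0..n-1, edges = [(u, v, w), ...], source s.
    Returns dist[] or None if a negative-weight cycle is reachable."""
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):          # n-1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:           # one extra round detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist
```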
All-pairs shortest paths

Input: Digraph G = (V, E), where V = {1, 2, …, n}, with edge-weight function w : E → R.
Output: n × n matrix of shortest-path lengths δ(i, j) for all i, j ∈ V.

Dynamic programming

Consider the n × n weighted adjacency matrix A = (a_ij), where a_ij = w(i, j) if (i, j) ∈ E and ∞ otherwise, and define

    d_ij^(m) = weight of a shortest path from i to j that uses at most m edges.
IDEA (a first attempt): Run Bellman-Ford once from each vertex.
• Time = O(V²E).
• Dense graph (Θ(n²) edges) ⇒ Θ(n⁴) time in the worst case.
Good first try! (With nonnegative edge weights, running Dijkstra's algorithm |V| times gives O(VE + V² lg V); for general edge weights, three algorithms today.)

Claim: We have

    d_ij^(0) = 0 if i = j, ∞ if i ≠ j;

and for m = 1, 2, …, n – 1,

    d_ij^(m) = min_k { d_ik^(m–1) + a_kj }.

Proof of claim: A shortest path from i to j that uses at most m edges either uses at most m – 1 edges, or consists of a shortest path from i to some vertex k using at most m – 1 edges followed by the edge (k, j); minimizing over all choices of k gives the recurrence. Computing d^(m) from d^(m–1) is relaxation:

for k ← 1 to n
    do if d_ij > d_ik + a_kj
        then d_ij ← d_ik + a_kj

Note: No negative-weight cycles implies δ(i, j) = d_ij^(n–1) = d_ij^(n) = d_ij^(n+1) = ⋯.
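The claim and the relaxation loop above translate directly into code. A sketch, assuming ∞ is encoded as `float('inf')` and vertices are 0-indexed (the function names are mine):

```python
INF = float("inf")

def extend(D, A):
    """One step of the recurrence: d_ij^(m) = min(d_ij^(m-1),
    min_k { d_ik^(m-1) + a_kj }).  Starting C as a copy of D keeps the
    'at most m edges' semantics."""
    n = len(A)
    C = [row[:] for row in D]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if D[i][k] + A[k][j] < C[i][j]:
                    C[i][j] = D[i][k] + A[k][j]
    return C

def apsp_slow(A):
    """Theta(n^4): apply the recurrence n-1 times starting from D^(0)."""
    n = len(A)
    D = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for _ in range(n - 1):
        D = extend(D, A)
    return D
```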

Matrix multiplication

Compute C = A · B, where C, A, and B are n × n matrices:

    c_ij = ∑_{k=1}^{n} a_ik · b_kj.

Time = Θ(n³) using the standard algorithm.

What if we map "+" → "min" and "·" → "+"? Then

    c_ij = min_k { a_ik + b_kj },

which is exactly the shortest-path recurrence: D^(m) = D^(m–1) "×" A, where "×" denotes this (min, +) product. Its identity matrix is

    I = [ 0 ∞ ∞ ∞ ]
        [ ∞ 0 ∞ ∞ ]  = D^(0) = (d_ij^(0)).
        [ ∞ ∞ 0 ∞ ]
        [ ∞ ∞ ∞ 0 ]

The (min, +) multiplication is associative, and with the real numbers it forms an algebraic structure called a closed semiring. Consequently, we can compute

    D^(1) = D^(0) · A = A^1
    D^(2) = D^(1) · A = A^2
        ⋮
    D^(n–1) = D^(n–2) · A = A^(n–1),

yielding D^(n–1) = (δ(i, j)). Time = Θ(n · n³) = Θ(n⁴). No better than n × Bellman-Ford.

Improved matrix multiplication algorithm

Repeated squaring: A^(2k) = A^k × A^k, so compute A², A⁴, …, A^(2^⌈lg(n–1)⌉) using only O(lg n) squarings. Note: A^(n–1) = A^n = A^(n+1) = ⋯, so overshooting n – 1 is harmless. Time = Θ(n³ lg n). To detect negative-weight cycles, check the diagonal for negative values in O(n) additional time.

Floyd-Warshall algorithm

Also dynamic programming, but faster! Define

    c_ij^(k) = weight of a shortest path from i to j with intermediate vertices belonging to the set {1, 2, …, k}.

Thus, δ(i, j) = c_ij^(n). Also, c_ij^(0) = a_ij.
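The repeated-squaring idea can be sketched as follows, with the same `float('inf')` encoding of ∞ as before; `min_plus` plays the role of the "×" product, and forcing the diagonal to 0 makes each square mean "at most twice as many edges":

```python
INF = float("inf")

def min_plus(A, B):
    """(min, +) matrix product: c_ij = min_k { a_ik + b_kj }."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_squaring(A):
    """Theta(n^3 lg n): square D until paths of up to n-1 edges are covered.
    The 0 diagonal of D^(0) makes squaring monotone ('at most m edges')."""
    n = len(A)
    D = [[0 if i == j else A[i][j] for j in range(n)] for i in range(n)]
    m = 1
    while m < n - 1:
        D = min_plus(D, D)      # D^(2m) = D^(m) "x" D^(m)
        m *= 2
    return D
```

Afterwards a negative value on the diagonal of D would indicate a negative-weight cycle, as noted above.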
Floyd-Warshall recurrence

    c_ij^(k) = min { c_ij^(k–1), c_ik^(k–1) + c_kj^(k–1) }

Either the shortest such path avoids vertex k (first term), or it passes through k, splitting into an i → k piece and a k → j piece whose intermediate vertices all lie in {1, 2, …, k – 1} (second term).

Pseudocode for Floyd-Warshall

for k ← 1 to n
    do for i ← 1 to n
        do for j ← 1 to n
            do if c_ij > c_ik + c_kj
                then c_ij ← c_ik + c_kj    ⊳ relaxation

Notes:
• Okay to omit superscripts, since extra relaxations can't hurt.
• Runs in Θ(n³) time.
• Simple to code.
• Efficient in practice.

Transitive closure of a directed graph

Compute

    t_ij = 1 if there exists a path from i to j,
           0 otherwise.

IDEA: Use Floyd-Warshall, but with (∨, ∧) instead of (min, +):

    t_ij^(k) = t_ij^(k–1) ∨ (t_ik^(k–1) ∧ t_kj^(k–1)).

Time = Θ(n³).

Graph reweighting

Theorem. Given a function h : V → R, reweight each edge (u, v) ∈ E by w_h(u, v) = w(u, v) + h(u) – h(v). Then, for any two vertices, all paths between them are reweighted by the same amount.
Shortest paths in reweighted graphs

Proof. Let p = v_1 → v_2 → ⋯ → v_k be a path in G. We have

    w_h(p) = ∑_{i=1}^{k–1} w_h(v_i, v_{i+1})
           = ∑_{i=1}^{k–1} (w(v_i, v_{i+1}) + h(v_i) – h(v_{i+1}))
           = ∑_{i=1}^{k–1} w(v_i, v_{i+1}) + h(v_1) – h(v_k)    ⊳ the sum telescopes
           = w(p) + h(v_1) – h(v_k).

Same amount!

Corollary. δ_h(u, v) = δ(u, v) + h(u) – h(v).

Johnson's algorithm

IDEA: Find a function h : V → R such that w_h(u, v) ≥ 0 for all (u, v) ∈ E. Then, run Dijkstra's algorithm from each vertex on the reweighted graph.
NOTE: w_h(u, v) ≥ 0 iff h(v) – h(u) ≤ w(u, v).

1. Find such a function h by using Bellman-Ford to solve the difference constraints h(v) – h(u) ≤ w(u, v), or determine that a negative-weight cycle exists.
   • Time = O(VE).
2. Run Dijkstra's algorithm using w_h from each vertex u ∈ V to compute δ_h(u, v) for all v ∈ V.
   • Time = O(VE + V² lg V).
3. For each (u, v) ∈ V × V, compute δ(u, v) = δ_h(u, v) – h(u) + h(v).
   • Time = O(V²).

Total time = O(VE + V² lg V).
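The three steps above can be assembled into one sketch. This uses a binary-heap Dijkstra via `heapq`, so the per-source bound is O(E lg V) rather than the Fibonacci-heap bound quoted in the lecture; the edge-list representation and names are my own:

```python
import heapq

INF = float("inf")

def johnson(n, edges):
    """All-pairs shortest paths with general edge weights.
    Vertices are 0..n-1; edges is a list of (u, v, w) triples.
    Returns the n x n matrix of delta(u, v), or None if the graph
    contains a negative-weight cycle."""
    # Step 1: Bellman-Ford from a phantom source s = n with a weight-0
    # edge to every vertex; the distances h satisfy h(v) - h(u) <= w(u, v).
    aug = edges + [(n, v, 0) for v in range(n)]
    h = [INF] * (n + 1)
    h[n] = 0
    for _ in range(n):                      # (n+1) vertices => n rounds
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in aug:                     # extra round: negative cycle?
        if h[u] + w < h[v]:
            return None
    # Step 2: Dijkstra from each vertex under w_h(u,v) = w(u,v)+h(u)-h(v) >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    D = [[INF] * n for _ in range(n)]
    for s in range(n):
        dist = [INF] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue                    # stale heap entry
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        # Step 3: undo the reweighting.
        for v in range(n):
            D[s][v] = dist[v] - h[s] + h[v]
    return D
```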