Part VI. Graph Algorithms

Chapter 22. Elementary graph algorithms
Chapter 23. Minimum spanning trees
Chapter 24. Single-source shortest paths
Chapter 25. All-pairs shortest paths
Chapter 22. Elementary graph algorithms
• Representations of graphs
• Traversing graphs:
  (1) breadth-first search (BFS)
  (2) depth-first search (DFS)
• Applications:
  (1) topological sort
  (2) strongly connected components
Graph: G = (V,E)
Terminology and notation:
• graph G = (V, E), where V = {v1, . . . , vn} and E ⊆ V × V
  V = {1, 2, 3, 4, 5, 6, 7}
  E = {(1, 2), (1, 3), (2, 3), (2, 5), (3, 4), (3, 5), (4, 5), (4, 6), (5, 6), (5, 7)}
• weight w : E → ℝ, e.g., w(1, 2) = 4, w(5, 6) = 3, etc.
• degree: deg(v) = the number of edges incident on v, e.g., deg(3) = 4, deg(7) = 1
• path: there is a path from a to b if (v1, v2), . . . , (vk−1, vk) ∈ E
  with v1 = a and vk = b. The path is a simple path if v1, . . . , vk are all different.
• cycle: a path with v1 = vk.
  It is a self-loop if k = 1 and (v1, v1) ∈ E.
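The vertex and edge sets above translate directly into code; a minimal Python sketch (the variable names and the adjacency encoding are mine, not from the slides):

```python
# Vertex and edge sets of the example graph (undirected).
V = {1, 2, 3, 4, 5, 6, 7}
E = [(1, 2), (1, 3), (2, 3), (2, 5), (3, 4), (3, 5),
     (4, 5), (4, 6), (5, 6), (5, 7)]

# Build a neighbor map and compute degrees from it.
adj = {v: set() for v in V}
for u, v in E:
    adj[u].add(v)
    adj[v].add(u)   # undirected: record both directions

deg = {v: len(adj[v]) for v in V}
print(deg[3], deg[7])   # deg(3) = 4, deg(7) = 1
```

Note that deg(7) = 1 here, since vertex 7 appears in only one edge, (5, 7).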
• digraphs: directed graphs
• complete graphs: Kn, e.g., K6
• bipartite graphs: G = (V1 ∪ V2, E) with V1 ∩ V2 = ∅, e.g., K3,3
• planar graphs: graphs that can be embedded in the plane without crossing edges.
  However, K5 is not planar, and neither is K3,3.
• trees: connected graphs that do not contain cycles; e.g.,
• k-trees:
  a 1-tree is a tree;
  a 2-tree is a graph with treewidth 2
Representations of graphs:
• adjacency matrix
• adjacency list
Adjacency matrix for a weighted graph
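The two representations can be sketched side by side. The weights below are illustrative: the slides only give w(1, 2) = 4 and w(5, 6) = 3, the rest are made up for the example.

```python
# Two standard graph representations for a small weighted undirected graph.
n = 7
edges = {(1, 2): 4, (1, 3): 2, (2, 5): 7, (5, 6): 3, (5, 7): 1}

# Adjacency matrix: O(n^2) space, O(1) edge-weight lookup.
INF = float("inf")          # "no edge" marker
W = [[INF] * (n + 1) for _ in range(n + 1)]   # 1-indexed vertices
for (u, v), w in edges.items():
    W[u][v] = W[v][u] = w

# Adjacency list: O(n + m) space, better for sparse graphs.
adj = {v: [] for v in range(1, n + 1)}
for (u, v), w in edges.items():
    adj[u].append((v, w))
    adj[v].append((u, w))

print(W[1][2], W[5][6])   # 4 3
print(sorted(adj[5]))     # neighbors of 5 with their weights
```

The matrix wins when edge lookups dominate; the list wins when the graph is sparse and we iterate over neighbors, which is what BFS and DFS do.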
Traversing graphs
Basic ideas of depth-first search (DFS) and breadth-first search (BFS).
Both methods yield "search trees",
or a "search forest" (if the graph is not connected).
DFS on directed graphs: search tree
DFS on undirected graphs: search tree
Traversal of graphs is an important task:
• navigating the whole graph;
• connectivity checking;
• cycle checking;
• etc.
DFS and BFS are two fundamental algorithms for graph traversal!
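One of these tasks, the connectivity check, can be sketched with a queue-based BFS; the graph encoding and function names below are mine:

```python
from collections import deque

def bfs_reachable(adj, s):
    """Return the set of vertices reachable from s via BFS."""
    visited = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                q.append(v)
    return visited

def is_connected(adj):
    """An undirected graph is connected iff BFS from any one vertex reaches all."""
    start = next(iter(adj))
    return len(bfs_reachable(adj, start)) == len(adj)

# The example graph from the earlier slide.
adj = {1: [2, 3], 2: [1, 3, 5], 3: [1, 2, 4, 5],
       4: [3, 5, 6], 5: [2, 3, 4, 6, 7], 6: [4, 5], 7: [5]}
print(is_connected(adj))   # True
```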
First recursive DFS algorithm, assuming G is connected.

Recursive-DFS(G, u)
1. if not u.visit
2.   u.visit = true;                        { mark u "visited" }
3.   for each v ∈ Adj[u] with not v.visit   { u's unvisited neighbors }
4.     v.π = u;                             { set v's parent to be u }
5.     Recursive-DFS(G, v);
6. return;

How does the algorithm start?
• initially set u.visit = false for every vertex u ∈ G.V;
• set s.π = NULL for some specific s ∈ G.V;
• call Recursive-DFS(G, s).
But if G is not connected, what should we do?
To-Start-DFS(G)
1. for each s ∈ G.V                         { initialize visit values }
2.   s.visit = false;
3.   s.π = NULL;
4. for each s ∈ G.V with not s.visit
5.   Recursive-DFS(G, s);

Recursive-DFS(G, u)
1. if not u.visit
2.   u.visit = true;                        { mark u "visited" }
3.   for each v ∈ Adj[u] with not v.visit   { u's unvisited neighbors }
4.     v.π = u;                             { set v's parent to be u }
5.     Recursive-DFS(G, v);
6. return;
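A direct Python transcription of To-Start-DFS and Recursive-DFS, with dictionaries standing in for the visit and π vertex attributes:

```python
def to_start_dfs(adj):
    """Run DFS over every component; return the parent map (π)."""
    visit = {s: False for s in adj}   # initialize visit values
    pi = {s: None for s in adj}       # s.π = NULL
    for s in adj:
        if not visit[s]:
            recursive_dfs(adj, s, visit, pi)
    return pi

def recursive_dfs(adj, u, visit, pi):
    if not visit[u]:
        visit[u] = True               # mark u "visited"
        for v in adj[u]:              # u's neighbors
            if not visit[v]:          # unvisited only
                pi[v] = u             # set v's parent to be u
                recursive_dfs(adj, v, visit, pi)

# Two components: {1, 2, 3} and {4, 5}; the outer loop restarts DFS on the second.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2], 4: [5], 5: [4]}
pi = to_start_dfs(adj)
print(pi[1], pi[4])   # None None  (each component root has no parent)
```

The outer loop in to_start_dfs is exactly the answer to the question on the previous slide: it restarts the search from any still-unvisited vertex, producing a search forest rather than a single tree.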
DFS (from the textbook) computes discovery and finish time stamps
(u.d and u.f) for every visited vertex u.
Chapter 22. Elementary graph algorithms
!: edge being explored;
!: edge path taken by DFS
Another example of DFS execution (page 605)
Time complexity of the DFS algorithm:
Θ(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges of G.
Properties of depth-first search:
(1) u = v.π iff DFS-Visit(G, v) is called during the scan of u's adjacency list.
(2) Theorem 22.7 (Parenthesis Theorem): for any u, v, exactly one of the
following three conditions holds:
• [u.d, u.f] and [v.d, v.f] are entirely disjoint, and neither u nor v is a
  descendant of the other in the search forest;
• [u.d, u.f] is contained entirely within [v.d, v.f], and u is a descendant of v; or
• [v.d, v.f] is contained entirely within [u.d, u.f], and v is a descendant of u.

Corollary 22.8 (Nesting of descendants' intervals): Vertex v is a proper descendant
of vertex u in the depth-first search forest if and only if u.d < v.d < v.f < u.f.
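The parenthesis structure can be checked empirically with a timestamped DFS; this sketch follows the textbook's u.d / u.f convention, though the encoding details are mine:

```python
def dfs_timestamps(adj):
    """DFS recording discovery (d) and finish (f) times for each vertex."""
    d, f, time = {}, {}, [0]

    def visit(u):
        time[0] += 1
        d[u] = time[0]            # u discovered (turns gray)
        for v in adj[u]:
            if v not in d:        # v still white
                visit(v)
        time[0] += 1
        f[u] = time[0]            # u finished (turns black)

    for s in adj:                 # restart to cover every component
        if s not in d:
            visit(s)
    return d, f

adj = {1: [2, 3], 2: [3], 3: [], 4: [1]}
d, f = dfs_timestamps(adj)

# Theorem 22.7: for any u, v the intervals [d, f] are disjoint or nested.
for u in adj:
    for v in adj:
        assert (f[v] < d[u] or f[u] < d[v]
                or (d[u] <= d[v] and f[v] <= f[u])
                or (d[v] <= d[u] and f[u] <= f[v]))
```

Starting from vertex 1 here, the intervals come out as [1,6], [2,5], [3,4], [7,8]: the first three nest (3 is a descendant of 2, which is a descendant of 1), and vertex 4's interval is disjoint from all of them, exactly as the theorem predicts.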
Theorem 22.9 (White-path Theorem): In a depth-first search forest of a graph G,
vertex v is a descendant of vertex u if and only if, at the time u.d when the search
discovers u, there is a path from u to v consisting entirely of white vertices.

Proof: (⇒)
Case 1: u = v; the claim is trivially true (a path of length 0).
Case 2: v is a proper descendant of u; apply Corollary 22.8 to every vertex on the
tree path from u to v, so each of them is discovered after time u.d and is therefore
still white at time u.d; the claim is true.

(⇐)
Assume that at time u.d there is a white path from u to v as stated in the theorem,
but v does not become a descendant of u.
Let (w, x) be an edge on the path such that x is the first vertex on the path that
is not a descendant of u. Note that w is a descendant of u (or is u itself).
Because x is white at time u.d, we have u.d < x.d.
Because (w, x) is an edge, there are two possible scenarios:
(1) when (w, x) is explored, x has already been discovered; then x.d < w.f;
(2) when (w, x) is explored, x is still white and is discovered next; then also x.d < w.f.
Since w is a descendant of u (or u itself), Corollary 22.8 gives w.f ≤ u.f, so
u.d < x.d < w.f ≤ u.f, and thus u.d < x.d < u.f.
By Theorem 22.7, the interval [x.d, x.f] must then be entirely contained within
[u.d, u.f], so x is a descendant of u. This contradicts the earlier assumption.
Therefore v must be a descendant of u.
Chapter 22. Elementary graph algorithms
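The theorem can be checked mechanically on a small example. The sketch below (an illustrative graph of my choosing, not from the text) runs DFS once, then tests, for every pair (u, v), that v is a descendant of u (the interval nesting of Corollary 22.8) exactly when u reaches v through vertices that were still white at time u.d, i.e. vertices with discovery time greater than u.d.

```python
def dfs_times(adj):
    """DFS recording discovery time d and finishing time f per vertex."""
    d, f, time = {}, {}, [0]

    def visit(u):
        time[0] += 1; d[u] = time[0]
        for v in adj[u]:
            if v not in d:
                visit(v)
        time[0] += 1; f[u] = time[0]

    for u in adj:
        if u not in d:
            visit(u)
    return d, f

def white_path_exists(adj, d, u, v):
    """Search from u using only vertices that were white at time u.d."""
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for x in adj[w]:
            if x not in seen and d[x] > d[u]:   # x was white at time u.d
                seen.add(x)
                stack.append(x)
    return False

adj = {'u': ['v', 'x'], 'v': ['y'], 'x': ['v'], 'y': ['x'], 'w': ['y', 'z'], 'z': []}
d, f = dfs_times(adj)
for u in adj:
    for v in adj:
        descendant = d[u] <= d[v] and f[v] <= f[u]   # Corollary 22.8 (u = v allowed)
        assert descendant == white_path_exists(adj, d, u, v)
```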
Classification of edges (for directed graphs)
• tree edges: those in the search tree (forest);
(u, v) is a tree edge if v was discovered by exploring (u, v);
• back edges: those connecting a vertex to an ancestor;
a self-loop, in a directed graph, is considered a back edge;
• forward edges: those connecting a vertex to a descendant;
• cross edges: all other edges;
Chapter 22. Elementary graph algorithms
To identify the type of edge (u, v) by the color of v when (u, v) is first explored:
WHITE: tree edge;
GRAY: back edge;
BLACK: forward edge (if u.d < v.d) or cross edge (if u.d > v.d);
Chapter 22. Elementary graph algorithms
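The color test can be folded directly into DFS. The sketch below (assuming an adjacency-list dictionary; the example graph is illustrative) labels each directed edge when it is first explored: white target means tree, gray means back, and black targets are split into forward versus cross by comparing discovery times, as above.

```python
WHITE, GRAY, BLACK = 0, 1, 2

def classify_edges(adj):
    """Run DFS and classify every directed edge as it is explored."""
    color = {u: WHITE for u in adj}
    d, time, kind = {}, [0], {}

    def visit(u):
        color[u] = GRAY
        time[0] += 1; d[u] = time[0]
        for v in adj[u]:
            if color[v] == WHITE:
                kind[(u, v)] = 'tree'
                visit(v)
            elif color[v] == GRAY:
                kind[(u, v)] = 'back'
            else:                    # BLACK: compare discovery times
                kind[(u, v)] = 'forward' if d[u] < d[v] else 'cross'
        color[u] = BLACK

    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return kind

adj = {'u': ['v', 'x'], 'v': ['y'], 'x': ['v'], 'y': ['x'], 'w': ['y', 'z'], 'z': ['z']}
kind = classify_edges(adj)
```

On this graph the self-loop (z, z) is reported as a back edge, matching the classification above.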
Theorem 22.10 In a depth-first search of an undirected graph G, every
edge of G is either a tree edge or a back edge.
Proof. Let (u, v) be an edge in G. Assume, without loss of generality, that
u.d < v.d in the depth-first search of G. Then there are two scenarios:
(1) v is discovered by exploring edge (u, v); then (u, v) is a tree edge;
(2) v is discovered, but not through exploring edge (u, v).
Because (u, v) is an edge and u.d < v.d, v is discovered while u is still gray.
Since u is in the adjacency list of v, (v, u) will eventually be explored from v
while u is still gray, making it a back edge.
Chapter 22. Elementary graph algorithms
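As a quick sanity check of Theorem 22.10, the sketch below (an assumed example graph, stored with both directions of each undirected edge) classifies each edge the first time either direction is explored and confirms that only tree and back edges appear.

```python
def classify_undirected(adj):
    """DFS on an undirected graph; classify each edge {u, v} once."""
    color, d, time, kind = {u: 0 for u in adj}, {}, [0], {}

    def visit(u):
        color[u] = 1                      # gray
        time[0] += 1; d[u] = time[0]
        for v in adj[u]:
            e = frozenset((u, v))
            if color[v] == 0:             # white: tree edge
                kind[e] = 'tree'
                visit(v)
            elif e not in kind:           # first exploration of this edge
                kind[e] = 'back' if color[v] == 1 else 'other'
        color[u] = 2                      # black

    for u in adj:
        if color[u] == 0:
            visit(u)
    return kind

# Triangle: both directions of each edge are listed.
kind = classify_undirected({'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']})
assert set(kind.values()) <= {'tree', 'back'}   # 'other' never occurs
```

The `'other'` label is there only to make the claim falsifiable; by the theorem it is never assigned.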
Breadth First Search (BFS)
Chapter 22. Elementary graph algorithms
Breadth First Search Algorithm (with a queue)
Time complexity of BFS: O(|V| + |E|)
Note: BFS finds a shortest path from s to every other vertex
(in an unweighted graph). (Why?)
Chapter 22. Elementary graph algorithms
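A minimal queue-based BFS sketch (assuming an adjacency-list dictionary as input) follows. Each vertex is enqueued at most once and each adjacency list is scanned once, giving the O(|V| + |E|) bound; because vertices are discovered in nondecreasing order of distance from s, `dist[v]` is the length of a shortest unweighted path, which answers the "(Why?)" above.

```python
from collections import deque

def bfs(adj, s):
    """BFS from s; returns unweighted shortest-path distances and parents."""
    dist, parent = {s: 0}, {s: None}
    q = deque([s])
    while q:
        u = q.popleft()                  # FIFO order drives level-by-level search
        for v in adj[u]:
            if v not in dist:            # v is white: discovered now
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent

adj = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': []}
dist, parent = bfs(adj, 's')
```

Following `parent` links back from any vertex to s traces one shortest path.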
Applications
Reachability Problem
Input: G = (V, E), and s, t ∈ V;
Output: YES if and only if there is a path s ⇝ t in G.
• The problem can be solved with either DFS or BFS
by searching the graph from s until t shows up.
Chapter 22. Elementary graph algorithms
Reachability Problem
Reachability(G, u, t)
1. u.visit = true;
2. for each v ∈ Adj[u] with not v.visit
3. if v = t then reachable = Yes; exit;
4. else v.π = u;
5. Reachability(G, v, t);
6. return ( );
Main()
reachable = No;
Reachability(G, s, t);
print(reachable);
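The recursive procedure above translates directly into Python; this is a sketch using a `visited` set in place of the `visit` flags and a boolean return value in place of the global `reachable` (the graph `adj` is a hypothetical example):

```python
def reachable(adj, s, t):
    """Recursive DFS from s; returns True iff a path s ~> t exists."""
    visited = set()

    def dfs(u):
        if u == t:
            return True
        visited.add(u)
        for v in adj.get(u, []):
            if v not in visited and dfs(v):
                return True      # stop as soon as t shows up
        return False

    return dfs(s)

adj = {'s': ['a'], 'a': ['b'], 'b': [], 'c': ['s']}
print(reachable(adj, 's', 'b'))  # True
print(reachable(adj, 's', 'c'))  # False
```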
Path Counting Problem
Input: G = (V, E), and s, t ∈ V, where G is a DAG
Output: the number of paths s ⇝ t in G.
• we modify Reachability to count paths.
PathCounting(G, u, t)
1. u.visit = true;
2. if u = t then return ( );
3. for each v ∈ Adj[u]
4. if v.visit then u.c = u.c + v.c;
5. else v.π = u;
6. PathCounting(G, v, t);
7. u.c = u.c + v.c;
8. return ( );
Main()
1. for each u ∈ G
2. u.c = 0;
3. t.c = 1; { the empty path from t to itself }
4. PathCounting(G, s, t);
5. print (s.c)
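The same memoized DFS count can be sketched in Python; here `c[u]` caches the number of u-to-t paths (with the base case `t` counting as one path), which is correct on DAGs, and the example graph is hypothetical:

```python
def count_paths(adj, s, t):
    """Number of distinct s-to-t paths in a DAG, via memoized DFS.
    c[u] = number of paths from u to t; reaching t counts as one path."""
    c = {}

    def dfs(u):
        if u == t:
            return 1
        if u in c:               # already counted: reuse the cached value
            return c[u]
        c[u] = sum(dfs(v) for v in adj.get(u, []))
        return c[u]

    return dfs(s)

# Diamond DAG: two paths, s->a->t and s->b->t.
adj = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []}
print(count_paths(adj, 's', 't'))  # 2
```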
Topological sorting
• On directed acyclic graphs (DAGs)
A sorted order: socks, shorts, pants, shoes, shirt, tie, belt, jacket, watch.
• apply the DFS algorithm.
• reversed order of finish times:
p, n, o, s, m, r, y, v, x, w, z, u, q, t
• Correctness proof?
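The "reversed order of finish times" recipe can be sketched as a short Python routine; the clothing DAG below is a hypothetical fragment of the slide's example:

```python
def topological_sort(adj):
    """DFS all vertices; listing each vertex at its finish time and then
    reversing gives a topological order. Assumes adj is a DAG."""
    visited, order = set(), []

    def dfs(u):
        visited.add(u)
        for v in adj.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)          # appended exactly when u finishes

    for u in adj:                # cover every tree of the DFS forest
        if u not in visited:
            dfs(u)
    order.reverse()              # reverse finish-time order
    return order

adj = {'shirt': ['tie'], 'tie': ['jacket'], 'belt': ['jacket'],
       'jacket': [], 'socks': ['shoes'], 'shoes': []}
order = topological_sort(adj)
# Every edge (u, v) satisfies order.index(u) < order.index(v).
```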
Strongly connected components (SCC)
Let G = (V, E) be a digraph. A strongly connected component is a
maximal subgraph H = (V_H, E_H) of G such that for every two nodes
v, u ∈ V_H,
(1) there is a directed path v ⇝ u consisting of edges in E_H; and
(2) there is a directed path u ⇝ v consisting of edges in E_H.
Idea of an algorithm to use DFS to solve the SCC problem.
• use DFS to generate a DFS forest; each search tree T_u (rooted at u)
consists of vertices v such that u ⇝ v;
• use DFS again on T_u, hoping to search from
every vertex v within T_u to make sure v ⇝ u as well;
• however, this may be difficult (proof is left as an exercise).
Algorithm Strongly Connected Components(G)
1. call DFS(G) to compute u.f for each u ∈ G.V
2. compute G^T, the transpose of G { reverse all edges in G }
3. call DFS(G^T) (vertices are considered in decreasing
order of the finish times computed in step 1)
4. output each tree in the depth-first forest produced by step 3.
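The four steps above (this is Kosaraju's algorithm) can be sketched in Python; the example digraph is hypothetical:

```python
def strongly_connected_components(adj):
    """Two-pass DFS SCC algorithm: (1) DFS on G recording finish order;
    (2) DFS on the transpose G^T in decreasing finish-time order;
    each second-pass tree is one SCC."""
    finish, visited = [], set()

    def dfs1(u):
        visited.add(u)
        for v in adj.get(u, []):
            if v not in visited:
                dfs1(v)
        finish.append(u)           # appended in finish-time order

    for u in adj:
        if u not in visited:
            dfs1(u)

    # Transpose: reverse every edge of G.
    radj = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            radj.setdefault(v, []).append(u)

    sccs, visited = [], set()

    def dfs2(u, comp):
        visited.add(u)
        comp.append(u)
        for v in radj.get(u, []):
            if v not in visited:
                dfs2(v, comp)

    for u in reversed(finish):     # decreasing finish time
        if u not in visited:
            comp = []
            dfs2(u, comp)
            sccs.append(comp)      # one second-pass tree = one SCC
    return sccs

# Cycle a->b->c->a is one SCC; d is its own SCC.
adj = {'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}
print(strongly_connected_components(adj))  # [['a', 'c', 'b'], ['d']]
```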
Ideas behind the algorithm:
• the first pass of DFS results in a DFS forest; let T be a tree with root r;
for every vertex u ∈ T, r ⇝ u,
so an SCC can only be produced from within some tree of the forest;
• for a vertex v ∈ T with v ≠ r, we are not sure that v ⇝ u;
• instead, we would like to check whether v ⇝ r for each v ∈ T
(because r ⇝ u and v ⇝ r together imply v ⇝ u);
• that is the same as using a second DFS (starting from r)
to check whether r ⇝ v after all edge directions are reversed;
• only vertices in the same second-DFS tree belong to the same SCC.
Properties from algorithm Strongly Connected Components(G)
(1) Component graph: G^SCC = (V^SCC, E^SCC) is defined as follows:
let C1, C2, . . . , Ck be the k distinct SCCs of G. Then
V^SCC = {v1, v2, . . . , vk};
E^SCC = {(vi, vj) : ∃u ∈ Ci, v ∈ Cj, (u, v) ∈ E}.
Then G^SCC is a DAG (directed acyclic graph).
Proof. Assume, contrary to the claim, that for some vi, vj ∈ V^SCC
there is a path vi ⇝ vj and another path vj ⇝ vi, forming a cycle in
G^SCC.
By the definition of G^SCC, there must be a path in G from some vertex
in Ci to some vertex in Cj; at the same time, there is a path in G from
some vertex in Cj to some vertex in Ci. Then Ci and Cj would form a
single SCC, not two distinct SCCs. Contradiction.
Let C be an SCC; define f(C) = max_{u ∈ C} {u.f} (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C′ be distinct strongly connected
components of a directed graph G. If (u, v) ∈ E, where u ∈ C and
v ∈ C′, then f(C) > f(C′).
Proof: Assume the opposite, i.e., f(C) < f(C′). Then there must be
vertices x ∈ C and y ∈ C′ such that x.f < y.f. Now consider the first
DFS call; there are two situations:
(1) y was searched before x: by property (1) there is no path from y
to x, so y finishes before x is even discovered, and x.f > y.f;
(2) y was searched after x:
since there is a path from x to y because of edge (u, v), y is discovered
and finished within x's search, so x.f > y.f.
Both cases contradict the assumption, so f(C) > f(C′).
Chapter 22. Elementary graph algorithms
Let C be a SCC, define f(C) = maxu2C{u.f}, (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C 0 be distinct strongly connected
components for directed graph G. If (u, v) 2 E, where u,2 C and
v 2 C 0, then f(C) > f(C 0).
Proof: Assume the opposite, i.e., f(C) < f(C 0). Then there must be vertices x 2 C and y 2 C 0 such that x.f < y.f . Now consider the first DFS call, there are two situations: (1) y was searched before x: by property (1) there is no path from y to x, x.f > y.f .
(2) y was search after x:
since there is a path from x to y because of (u, v), x.f > y.f .
Both cases contradicts the assumption. So f(C) > f(C 0).
Chapter 22. Elementary graph algorithms
Let C be a SCC, define f(C) = maxu2C{u.f}, (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C 0 be distinct strongly connected
components for directed graph G. If (u, v) 2 E, where u,2 C and
v 2 C 0, then f(C) > f(C 0).
Proof: Assume the opposite, i.e., f(C) < f(C 0). Then there must be vertices x 2 C and y 2 C 0 such that x.f < y.f . Now consider the first DFS call, there are two situations: (1) y was searched before x: by property (1) there is no path from y to x, x.f > y.f .
(2) y was search after x:
since there is a path from x to y because of (u, v), x.f > y.f .
Both cases contradicts the assumption. So f(C) > f(C 0).
Chapter 22. Elementary graph algorithms
Let C be a SCC, define f(C) = maxu2C{u.f}, (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C 0 be distinct strongly connected
components for directed graph G. If (u, v) 2 E, where u,2 C and
v 2 C 0, then f(C) > f(C 0).
Proof: Assume the opposite, i.e., f(C) < f(C 0). Then there must be vertices x 2 C and y 2 C 0 such that x.f < y.f . Now consider the first DFS call, there are two situations: (1) y was searched before x: by property (1) there is no path from y to x, x.f > y.f .
(2) y was search after x:
since there is a path from x to y because of (u, v), x.f > y.f .
Both cases contradicts the assumption. So f(C) > f(C 0).
Chapter 22. Elementary graph algorithms
Let C be a SCC, define f(C) = maxu2C{u.f}, (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C 0 be distinct strongly connected
components for directed graph G. If (u, v) 2 E, where u,2 C and
v 2 C 0, then f(C) > f(C 0).
Proof: Assume the opposite, i.e., f(C) < f(C 0). Then there must be vertices x 2 C and y 2 C 0 such that x.f < y.f . Now consider the first DFS call, there are two situations: (1) y was searched before x: by property (1) there is no path from y to x, x.f > y.f .
(2) y was search after x:
since there is a path from x to y because of (u, v), x.f > y.f .
Both cases contradicts the assumption. So f(C) > f(C 0).
Chapter 22. Elementary graph algorithms
Let C be a SCC, define f(C) = maxu2C{u.f}, (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C 0 be distinct strongly connected
components for directed graph G. If (u, v) 2 E, where u,2 C and
v 2 C 0, then f(C) > f(C 0).
Proof: Assume the opposite, i.e., f(C) < f(C 0). Then there must be vertices x 2 C and y 2 C 0 such that x.f < y.f . Now consider the first DFS call, there are two situations: (1) y was searched before x: by property (1) there is no path from y to x, x.f > y.f .
(2) y was search after x:
since there is a path from x to y because of (u, v), x.f > y.f .
Both cases contradicts the assumption. So f(C) > f(C 0).
Chapter 22. Elementary graph algorithms
Let C be a SCC, define f(C) = maxu2C{u.f}, (with the finish times
from the first DFS call).
(2) Lemma 22.14: Let C and C 0 be distinct strongly connected
components for directed graph G. If (u, v) 2 E, where u,2 C and
v 2 C 0, then f(C) > f(C 0).
Proof: Assume the opposite, i.e., f(C) < f(C 0). Then there must be vertices x 2 C and y 2 C 0 such that x.f < y.f . Now consider the first DFS call, there are two situations: (1) y was searched before x: by property (1) there is no path from y to x, x.f > y.f .
(2) y was search after x:
since there is a path from x to y because of (u, v), x.f > y.f .
Both cases contradicts the assumption. So f(C) > f(C 0).
(3) The algorithm Strongly-Connected-Components(G) correctly
computes the strongly connected components of a directed graph G.
We need to prove two statements:
(1) If v ⇝ u and u ⇝ v in G, then u and v belong to
the same component C produced by the algorithm.
(2) If u, v ∈ C, then we have v ⇝ u and u ⇝ v in G.
Proof:
(1) If v ⇝ u and u ⇝ v in G, then u and v belong to
the same component C produced by the algorithm.
Sketch of proof:
• assume in the 1st DFS, v was discovered before u (or the opposite);
• as v ⇝ u in G, u and v belong to the same search tree rooted at r,
with r.f ≥ v.f > u.f (note: r could be just v);
• as u ⇝ v in G, v ⇝ u in Gᵀ;
• now consider the 2nd DFS; there are 2 situations:
(1) searching from some w with w.f ≥ v.f (note: w could be v)
finds v first; then it finds u;
(2) the search finds u first; because v ⇝ u in G, u ⇝ v holds in Gᵀ,
so it also finds v.
In both situations, u and v belong to the same search tree in
the 2nd DFS. Therefore, u and v belong to the same component.
(2) If u, v ∈ C, then we have v ⇝ u and u ⇝ v in G.
Sketch of proof:
(1) assume u and v belong to the same tree with root r in the 2nd DFS;
(2) then r.f > u.f and r.f > v.f in the 1st DFS;
(3) the assumption in (1) also implies:
• r ⇝ u and r ⇝ v in Gᵀ;
• that is, u ⇝ r and v ⇝ r in G;
• then u.f > r.f and v.f > r.f in the 1st DFS,
which conflicts with the conclusions in (2), UNLESS
r ⇝ u and r ⇝ v in G also.
(4) This means: through r, v ⇝ u and u ⇝ v in G.
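The two-pass procedure the proofs above analyze (a first DFS on G recording finish times, then a DFS on Gᵀ taking roots in order of decreasing finish time) can be sketched in Python; the function name kosaraju_scc and the edge-list input format are illustrative choices, not from the text:

```python
from collections import defaultdict

def kosaraju_scc(vertices, edges):
    """Compute strongly connected components with two DFS passes."""
    adj = defaultdict(list)    # adjacency lists of G
    radj = defaultdict(list)   # adjacency lists of G^T (edges reversed)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # 1st DFS on G: append each vertex when it finishes, so `finished`
    # lists vertices in order of increasing finish time u.f.
    finished, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs1(v)
        finished.append(u)
    for u in vertices:
        if u not in seen:
            dfs1(u)

    # 2nd DFS on G^T, choosing roots in order of DECREASING finish time;
    # by the lemma, each search tree found is exactly one SCC.
    seen.clear()
    sccs = []
    def dfs2(u, comp):
        seen.add(u)
        comp.append(u)
        for v in radj[u]:
            if v not in seen:
                dfs2(v, comp)
    for u in reversed(finished):
        if u not in seen:
            comp = []
            dfs2(u, comp)
            sccs.append(comp)
    return sccs
```

For example, on the graph with edges (1,2), (2,1), (2,3), (3,4), (4,3), the sketch returns the two components {1, 2} and {3, 4}.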
Reachability Problem
Input: G = (V, E) and s, t ∈ V;
Output: YES if and only if there is a path s ⇝ t in G.
• The problem can be solved with DFS or BFS,
by searching the graph from s until t shows up.
Linear time: O(|E| + |V|). Can we do better?
• But first answer the following question:
can you write an SQL program to solve Reachability?
(Plain SQL without recursion cannot express transitive closure.)
• It appears that a loop is needed to solve Reachability. Why?
There is an inherent difficulty in parallel computation:
it is not known how to solve Reachability in time O(log n), even if Θ(n) CPUs are used.
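The linear-time search mentioned in the first bullet can be sketched as a BFS from s that stops as soon as t appears (the function name reachable is illustrative):

```python
from collections import defaultdict, deque

def reachable(edges, s, t):
    """Decide whether t is reachable from s, by BFS in O(|V| + |E|)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:            # t showed up: a path s ~> t exists
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False              # search exhausted without meeting t
```

Swapping the queue for a stack turns this into the DFS variant with the same asymptotic cost.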
Chapter 23. Minimum Spanning Trees
• A spanning tree of a graph G = (V, E) is a subgraph of G that is
a tree containing all vertices in V.
• A minimum spanning tree (MST) of an edge-weighted graph G is a
spanning tree with the least total edge weight.
The MST problem
Input: a connected, undirected graph G = (V, E) with weight function w : E → ℝ;
Output: a spanning tree T = (V, E′) such that
W(T) = Σ_{(u,v) ∈ E′} w(u, v) is minimum.
We will introduce two greedy algorithms: (1) Kruskal's and (2) Prim's.
• They have the same generic process to grow a spanning tree;
• but they differ in which edge to add to the partially grown tree.
Growing an MST
A generic process to grow an MST.
Generic-MST(G, w)  { given graph G and weight function w }
1. A = ∅
2. while A does not form a spanning tree
3.     find an edge (u, v) that is safe for A
4.     A = A ∪ {(u, v)}
5. return A
Loop invariant: A is always a subset of some MST.
Note: when the loop terminates, A is an MST.
Safe edge:
edge (u, v) is safe for A if it does not violate the loop invariant,
i.e., A ∪ {(u, v)} is a subset of some MST.
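As one concrete instantiation of Generic-MST, here is a minimal Kruskal's-algorithm sketch in Python: the safe edge chosen each round is the lightest edge joining two different trees of the current forest, tracked by a simplified disjoint-set union (the name kruskal_mst and the (w, u, v) edge format are illustrative):

```python
def kruskal_mst(vertices, weighted_edges):
    """Kruskal's algorithm; weighted_edges is a list of (w, u, v) triples."""
    parent = {v: v for v in vertices}   # disjoint-set forest, one tree per vertex

    def find(x):                        # find the root, halving paths as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    A = []                              # the growing set of safe edges
    for w, u, v in sorted(weighted_edges):   # edges in nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # (u, v) joins two components: safe
            parent[ru] = rv             # union the two trees
            A.append((u, v, w))
    return A
```

For instance, on the triangle with edges (1,2) of weight 1, (2,3) of weight 2, and (1,3) of weight 3, the heaviest edge is rejected and the returned tree has total weight 3. Prim's algorithm instantiates the same generic loop differently, always adding the lightest edge leaving the single tree grown from a start vertex.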
Chapter 23. Minimum Spanning Trees
We first need some terminology:
• cut: (S, V − S), a partition of V
• crossing: an edge (u, v) crosses the cut (S, V − S)
if u and v are in S and V − S, respectively
Chapter 23. Minimum Spanning Trees
Some more terminology:
• respect: a cut respects a set A of edges if no edge in A crosses the cut.
• light edge: an edge is a light edge crossing a cut if its weight
is the minimum of any edge that crosses the cut.
Chapter 23. Minimum Spanning Trees
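The four definitions translate almost line for line into code. The helpers below are a sketch of that translation; the tiny example graph and the cut S = {a, b} are assumptions for illustration.

```python
# Direct translations of the definitions above (example graph is assumed).
def crosses(edge, S):
    """(u, v) crosses the cut (S, V - S) iff exactly one endpoint is in S."""
    u, v = edge
    return (u in S) != (v in S)

def respects(S, A):
    """The cut (S, V - S) respects A iff no edge of A crosses it."""
    return not any(crosses(e, S) for e in A)

def light_edge(S, weights):
    """A light edge crossing the cut: minimum weight among crossing edges."""
    return min((e for e in weights if crosses(e, S)), key=lambda e: weights[e])

W = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 3, ("c", "d"): 4}
S = {"a", "b"}
print(respects(S, {("a", "b")}))   # (a, b) lies entirely inside S
print(light_edge(S, W))            # lightest edge leaving S
```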
Theorem 23.1 Let G = (V, E) be a connected, undirected graph.
Let A ⊆ E be contained in some MST for G.
Let (S, V − S) be any cut of G that respects A.
Let (u, v) be a light edge crossing the cut.
Then edge (u, v) is safe for A.
For the theorem, we need to prove:
(1) (u, v) does not form a cycle with A;
(2) A, after including (u, v), is still a subset of some MST.
Sketch of proof:
(1) If A ∪ {(u, v)} formed a cycle, there would have to be another edge in A
that crosses the cut (S, V − S) (WHY?), implying the cut did not respect A.
Contradiction.
(2) Assume some MST T with A ⊂ T.
First, T ∪ {(u, v)} contains a cycle! Why?
So there must be another edge (x, y) on this cycle that crosses the cut (S, V − S).
Since (u, v) is a light edge, T′ = T − {(x, y)} ∪ {(u, v)} is also an MST.
Now A ∪ {(u, v)} ⊆ T′ because (x, y) ∉ A
(otherwise, the cut would not respect A).
Chapter 23. Minimum Spanning Trees
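The exchange step T′ = T − {(x, y)} ∪ {(u, v)} in part (2) can be checked numerically. Below, on an assumed 4-vertex graph, T is a spanning tree containing a non-light edge across the cut; swapping in the light edge cannot increase the total weight, so T was not minimal in the first place.

```python
# Numeric check of the exchange step from the proof sketch
# (the graph, the cut, and the tree T are assumptions for illustration).
W = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 3, ("c", "d"): 4}
S = {"a", "b"}                                 # the cut (S, V - S)

def weight(T):
    return sum(W[e] for e in T)

T = {("a", "b"), ("a", "c"), ("c", "d")}       # a spanning tree, weight 8
# The edge of T crossing the cut plays the role of (x, y).
x_y = next(e for e in T if (e[0] in S) != (e[1] in S))
# The light edge crossing the cut plays the role of (u, v).
u_v = min((e for e in W if (e[0] in S) != (e[1] in S)), key=W.get)

T_prime = (T - {x_y}) | {u_v}                  # T' = T - {(x, y)} ∪ {(u, v)}
print(weight(T), weight(T_prime))              # the swap can only lower weight
```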
Theorem 23.1 gives sufficient information for how to identify a safe
edge to make the Generic-MST algorithm work.
• Specific algorithms can be produced from Generic-MST based on
how the set A is grown.
• A may always be a tree (Prim's algorithm) or
may be a forest (Kruskal's algorithm).
MST-Kruskal(G, w)
1. A = ∅
2. for each vertex v ∈ G.V
3.     Make-Set(v)
4. sort the edges of E into non-decreasing order by weight w
5. for each edge (u, v) ∈ E, taken in that order
6.     if Find-Set(u) ≠ Find-Set(v)
7.         A = A ∪ {(u, v)}
8.         Union(u, v)
9. return A
Chapter 23. Minimum Spanning Trees
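The pseudocode above maps onto Python as follows. This is a sketch with a deliberately simple union-find (a parent dictionary with path compression, Union implemented by relinking roots); the example graph is an assumption.

```python
# A Python sketch of MST-Kruskal using a simple union-find.
def mst_kruskal(vertices, weights):
    parent = {v: v for v in vertices}          # Make-Set for each vertex

    def find_set(x):                           # with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    A = set()
    # Consider edges in non-decreasing order of weight.
    for (u, v) in sorted(weights, key=weights.get):
        ru, rv = find_set(u), find_set(v)
        if ru != rv:                           # endpoints in different trees
            A.add((u, v))                      # safe to add: no cycle formed
            parent[ru] = rv                    # Union(u, v)
    return A

# Hypothetical example graph (an assumption).
W = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 3, ("c", "d"): 4}
print(sorted(mst_kruskal({"a", "b", "c", "d"}, W)))
```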
Execution of Kruskal’s algorithm for MST
Chapter 23. Minimum Spanning Trees
At each iteration of the for loop we can, e.g., identify:
• A = {(A,F), (B,F), (C,G), (F,G)};
a cut that respects A: S = {A,B,C,D,F,G}, V − S = {E,H};
light edge (D,E) crosses the cut;
• A = {(A,F), (B,F), (C,G), (F,G), (D,E)};
a cut that respects A: S = {A,B,C,F,G,H}, V − S = {D,E};
light edge (E,H) crosses the cut.
Chapter 23. Minimum Spanning Trees
Kruskal's algorithm uses a disjoint-set data structure
(the elements are partitioned into disjoint sets):
• Make-Set(x): create a singleton set containing element x;
• Find-Set(x): identify the set that contains element x;
• Union(x, y): merge the two sets containing x and y into one.
Implementations (left: linked lists; right: disjoint-set forest)
Time complexity: O(log n) per Find-Set(x) and Union(x, y)
(and O(1) per Make-Set(x)) with the disjoint-set forest implementation.
Time complexity of Kruskal's algorithm: O(|E| log |V| + |V|).
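The disjoint-set forest with union by rank and path compression (the standard implementation; the method names follow the slide) can be sketched as:

```python
class DisjointSet:
    def __init__(self):
        self.parent, self.rank = {}, {}

    def make_set(self, x):                 # Make-Set(x)
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):                 # Find-Set(x), with path compression
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):                 # Union(x, y), by rank
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:  # attach the shorter tree under
            rx, ry = ry, rx                # the taller one
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet()
for v in 'abcd':
    ds.make_set(v)
ds.union('a', 'b')
ds.union('c', 'd')
```

Union by rank alone keeps tree height O(log n); combined with path compression, a sequence of m operations runs in nearly linear time.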
Chapter 23. Minimum Spanning Trees
MST-Prim(G,w,r)
1. for each u ∈ G.V
2.    u.key = ∞ { u.key is u's lightest connection to the set A = V − Q }
3.    u.π = NULL
4. r.key = 0 { start from vertex r }
5. Q = G.V { establish priority queue Q with key values }
6. while Q ≠ ∅
7.    u = Extract-Min(Q)
8.    for each v ∈ Adj[u]
9.       if v ∈ Q and w(u, v) < v.key { for vertices not yet in A, update distances }
10.         v.π = u
11.         v.key = w(u, v)
12. return π
With a priority queue Q, Extract-Min takes O(log |V|) time;
running time: O(|E| log |V|) with a binary heap,
or O(|E| + |V| log |V|) with a Fibonacci heap.
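A sketch of MST-Prim using Python's heapq as the priority queue. Since heapq has no Decrease-Key, stale entries are simply skipped after popping (lazy deletion); the small graph below is an illustrative assumption:

```python
import heapq

def mst_prim(adj, r):
    """adj: {u: [(v, w), ...]} undirected adjacency; returns predecessor map pi."""
    key = {u: float('inf') for u in adj}    # u.key = infinity
    pi = {u: None for u in adj}             # u.pi = NULL
    key[r] = 0                              # start from vertex r
    in_tree = set()                         # the set A = V - Q
    pq = [(0, r)]
    while pq:
        _, u = heapq.heappop(pq)            # Extract-Min(Q)
        if u in in_tree:
            continue                        # skip stale (superseded) entries
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v], pi[v] = w, u        # update v's connection to the tree
                heapq.heappush(pq, (w, v))  # lazy Decrease-Key
    return pi

adj = {'a': [('b', 1), ('c', 3)],
       'b': [('a', 1), ('c', 2)],
       'c': [('a', 3), ('b', 2)]}
pi = mst_prim(adj, 'a')
```

Each edge pushes at most one queue entry, so this variant runs in O(|E| log |E|) = O(|E| log |V|) time, matching the binary-heap bound.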
Chapter 23. Minimum Spanning Trees
Summary of Kruskal's and Prim's algorithms:
• initialize the parent array;
initialize A = ∅ (Kruskal) or start from an initial vertex r (Prim);
• repeatedly choose from the remaining edges:
pick a light edge crossing a cut that respects A
and add it to A,
ensuring that A remains a subset of some MST;
• until A forms a spanning tree.
Chapter 23. Minimum Spanning Trees
Some questions about MSTs
• What are the "cuts" implicitly used in Kruskal's algorithm and
in Prim's algorithm, respectively?
• Can we develop a DP algorithm for the MST problem?
The main issue: how do solutions to subproblems help build
a solution to the whole problem?
What are the subproblems, and what do subsolutions look like?
Chapter 24. Single Source Shortest Paths
Chapter 24. Single-source shortest paths
Given a graph G = (V,E) with weight function w : E → R,
and a single source vertex s ∈ V,
for each vertex v ∈ V, find a shortest path s ⇝ v.
• A shortest path is a simple path.
• "Distance" is measured by the total edge weight on the path:
if p = (v0, v1, . . . , vk) is a path v0 ⇝ vk,
then the path weight is w(p) = Σ_{i=1}^{k} w(v_{i−1}, vi).
• The shortest distance between u and v is
δ(u, v) = min { w(p) : p is a path u ⇝ v }.
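The two definitions can be illustrated on a tiny graph (the vertices and weights below are assumptions for the example): the path weight w(p) sums the edge weights along p, and δ(u, v) is the minimum such sum over all paths u ⇝ v.

```python
# assumed edge weights of a tiny digraph
w = {('s', 'a'): 2, ('a', 'b'): 3, ('s', 'b'): 7}

def path_weight(p):
    """w(p) = sum of w(v_{i-1}, v_i) along p = (v0, ..., vk)."""
    return sum(w[(p[i - 1], p[i])] for i in range(1, len(p)))

wa = path_weight(('s', 'a', 'b'))   # 2 + 3
wb = path_weight(('s', 'b'))        # 7
# delta(s, b) = min over both paths s ~> b, i.e. min(wa, wb)
```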
Chapter 24. Single Source Shortest Paths
• Single-source shortest paths: from s to each vertex v ∈ V
• a special case: single-pair shortest path: from s to t
• all-pairs shortest paths: from s to t for all pairs s, t ∈ V.
Chapter 24. Single Source Shortest Paths
Lemma 24.1 (a subpath of a shortest path is a shortest path)
Given a weighted directed graph G = (V,E) with edge weight function w,
let p = (v0, v1, . . . , vk) be a shortest path v0 ⇝ vk. Then
p_ij = (vi, . . . , vj) is a shortest path vi ⇝ vj.
Proof idea (by contradiction): Assume that p_ij is not a shortest path from vi
to vj. Then there is a shorter path q_ij from vi to vj.
The path
q = (v0, . . . , vi, q_ij, vj , . . . , vk)
has weight
w(q) = Σ_{t=1}^{i} w(v_{t−1}, vt) + w(q_ij) + Σ_{t=j}^{k−1} w(vt, v_{t+1})
     < Σ_{t=1}^{i} w(v_{t−1}, vt) + w(p_ij) + Σ_{t=j}^{k−1} w(vt, v_{t+1}) = w(p),
which contradicts the assumption that p is a shortest path from v0 to vk.
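The lemma can be checked by brute force on a small graph: enumerate all simple paths, take a shortest path p, and verify every subpath p[i..j] attains δ(vi, vj). The graph below is an illustrative assumption:

```python
# assumed edge weights of a small digraph
w = {('s', 'a'): 1, ('a', 'b'): 2, ('b', 't'): 1, ('s', 'b'): 4, ('a', 't'): 5}

def paths(u, v):
    """All simple paths u ~> v using the edges in w."""
    result = []
    def extend(p):
        if p[-1] == v:
            result.append(tuple(p))
            return
        for (x, y) in w:
            if x == p[-1] and y not in p:
                extend(p + [y])
    extend([u])
    return result

def weight(p):
    return sum(w[(p[i - 1], p[i])] for i in range(1, len(p)))

def delta(u, v):
    ps = paths(u, v)
    return min(map(weight, ps)) if ps else float('inf')

p = min(paths('s', 't'), key=weight)    # a shortest path s ~> t
# Lemma 24.1: every subpath p[i..j] must achieve delta(p[i], p[j])
ok = all(weight(p[i:j + 1]) == delta(p[i], p[j])
         for i in range(len(p)) for j in range(i + 1, len(p)))
```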
Chapter 24. Single Source Shortest Paths
Some terminology:
• negative edge weights are allowed;
• a path that contains a cycle is not a simple path;
• negative-weight cycles and zero-weight cycles: a shortest path contains neither, since traversing a negative-weight cycle lowers the weight without bound, and a zero-weight cycle can be removed without changing the weight;
• representing shortest paths: each vertex stores a predecessor v.π, and the predecessors together form a shortest-path tree rooted at the source.
Example gallery: http://graphserver.sourceforge.net/gallery.html
(edge width ∝ 1/distance)
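The predecessor attributes π are all that is needed to recover an actual shortest path: follow them backward from any vertex to the source. A minimal sketch, assuming a dict-based predecessor map with illustrative vertex names:

```python
def reconstruct_path(pi, s, v):
    """Walk predecessor pointers back from v to the source s."""
    path = [v]
    while v != s:
        v = pi[v]            # v.pi in the slides' notation
        path.append(v)
    return path[::-1]        # reverse so the source comes first

# predecessor map of a small shortest-path tree rooted at "s"
pi = {"a": "s", "b": "a", "c": "s"}
print(reconstruct_path(pi, "s", "b"))  # ['s', 'a', 'b']
```

Because each vertex stores only one predecessor, the π values encode one shortest path per vertex, not all of them.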
Technique: relaxation
• Intuition:
if a path s ↝ v has distance estimate v.d (computed so far), and a path s ↝ u is newly discovered, then update
v.d = min{v.d, u.d + w(u, v)}
• In other words:
let v.d be an upper bound on the weight of a shortest path from s to v, initialized to ∞.
Relaxing edge (u, v) asks whether the path to v through u improves v.d; if so, it updates v.d and v.π.
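The relaxation step above can be sketched directly, using plain dictionaries for the d and π attributes (the tuple-keyed weight map is an illustrative encoding, not from the text):

```python
import math

def relax(u, v, w, d, pi):
    """Relax edge (u, v): improve v's estimate via u if possible."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u

# v.d initialized to infinity; source s has s.d = 0
d = {"s": 0, "v": math.inf}
pi = {"s": None, "v": None}
w = {("s", "v"): 3}
relax("s", "v", w, d, pi)
print(d["v"], pi["v"])  # 3 s
```

Relaxing an edge never increases an estimate, so d values only move downward toward the true shortest-path weights.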
Bellman-Ford algorithm
Bellman-Ford(G, w, s)
1.  for each vertex v ∈ G.V                (initialization)
2.      v.d = ∞
3.      v.π = NULL
4.  s.d = 0
5.  for i = 1 to |V| − 1                   (relaxation)
6.      for each edge (u, v) ∈ G.E
7.          if v.d > u.d + w(u, v)
8.              v.d = u.d + w(u, v)
9.              v.π = u
10. for each edge (u, v) ∈ G.E             (check for a negative-weight cycle)
11.     if v.d > u.d + w(u, v)
12.         return FALSE
13. return TRUE
Running time: O(|V||E|)
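The pseudocode transcribes almost line for line into Python; the edge-list representation (vertex labels, tuple-keyed weights) is an assumption made for the sketch:

```python
import math

def bellman_ford(vertices, edges, w, s):
    """Return (ok, d, pi); ok is False iff a negative-weight
    cycle is reachable from s."""
    d = {v: math.inf for v in vertices}      # initialization
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):       # |V| - 1 rounds of relaxation
        for (u, v) in edges:
            if d[v] > d[u] + w[(u, v)]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    for (u, v) in edges:                     # negative-cycle check
        if d[v] > d[u] + w[(u, v)]:
            return False, d, pi
    return True, d, pi

V = ["s", "a", "b"]
E = [("s", "a"), ("a", "b"), ("s", "b")]
w = {("s", "a"): 1, ("a", "b"): 2, ("s", "b"): 10}
ok, d, pi = bellman_ford(V, E, w, "s")
print(ok, d["b"])  # True 3
```

The two nested loops on lines 5 and 6 give the O(|V||E|) running time; the final pass adds only O(|E|).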
Properties of shortest paths and relaxation
Relax(u, v, w)
1. if v.d > u.d + w(u, v)
2.     v.d = u.d + w(u, v)
3.     v.π = u
Lemma 24.14 (convergence property): Let s ↝ u → v be a shortest path to v. If u.d = δ(s, u) holds before Relax(u, v, w) is called, then v.d = δ(s, v) after the call.
Proof: After the call, v.d ≤ u.d + w(u, v) = δ(s, u) + w(u, v) = δ(s, v). Since v.d ≥ δ(s, v) always holds (v.d is an upper bound), v.d = δ(s, v).
We want to prove that, if some shortest path s ↝ v consists of k edges, then Bellman-Ford obtains v.d = δ(s, v) after the kth round of relaxation (assuming there is no negative-weight cycle reachable from s).
Proof idea: Induction on k.
• Base: k = 0. Then v can only be s, and s.d = 0 = δ(s, s) after initialization.
• Inductive hypothesis: the claim holds for every vertex whose shortest path from s consists of k edges.
• Inductive step: let v be any vertex that has a shortest path s ↝ u → v consisting of k + 1 edges.
Then s ↝ u is a shortest path to u consisting of k edges (Lemma 24.1);
by the hypothesis, u.d = δ(s, u) after k rounds of relaxation.
Round k + 1 relaxes every edge, including (u, v), so by the convergence property (Lemma 24.14), v.d = δ(s, v)
after that round.
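The claim can be checked empirically by running only a fixed number of relaxation rounds and inspecting the estimates; the helper and the tiny path graph below are illustrative assumptions:

```python
import math

def relax_rounds(vertices, edges, w, s, rounds):
    """Run exactly `rounds` passes of edge relaxation from source s."""
    d = {v: math.inf for v in vertices}
    d[s] = 0
    for _ in range(rounds):
        for (u, v) in edges:
            if d[v] > d[u] + w[(u, v)]:
                d[v] = d[u] + w[(u, v)]
    return d

# path graph s -> a -> b: shortest paths use 1 edge (a) and 2 edges (b)
V = ["s", "a", "b"]
E = [("a", "b"), ("s", "a")]           # deliberately unfavorable edge order
w = {("s", "a"): 1, ("a", "b"): 1}
print(relax_rounds(V, E, w, "s", 1))   # d['b'] is still inf after one round
print(relax_rounds(V, E, w, "s", 2))   # d['b'] = 2 after the second round
```

With this edge order, vertex b (whose shortest path has 2 edges) is not settled after 1 round but is settled after 2, exactly as the claim predicts; k rounds suffice regardless of edge order.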
Lemma 24.15 (path-relaxation property): Let p = (v0, v1, . . . , vk) be a shortest path from s = v0 to vk. If a sequence of relaxation steps occurs that includes, in order, relaxing
the edges (v0, v1), (v1, v2), . . . , (vk−1, vk), then vk.d = δ(s, vk) after these relaxations
and at all times afterward. This property holds no matter what other edge relaxations
occur, including relaxations that are intermixed with relaxations of the edges of p.
Proof: We prove by induction on i that vi.d = δ(s, vi) after the ith edge (vi−1, vi) on
path p is relaxed.
Basis: i = 0. Before any edge of p is relaxed, v0.d = s.d = 0 = δ(s, s).
Inductive hypothesis: vi−1.d = δ(s, vi−1).
Inductive step: after we relax edge (vi−1, vi), the convergence property gives
vi.d = δ(s, vi), and this holds at all times afterward.
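The lemma can be illustrated directly: relax the edges of the path in order with an unrelated relaxation intermixed, and the endpoint's estimate still converges to δ. The graph below is an illustrative assumption:

```python
import math

def relax(u, v, w, d):
    """Relax edge (u, v) on the estimate map d."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]

# shortest path p = (s, a, b); edge (s, b) plays the "other" relaxation
w = {("s", "a"): 1, ("a", "b"): 1, ("s", "b"): 5}
d = {"s": 0, "a": math.inf, "b": math.inf}
relax("s", "b", w, d)   # unrelated relaxation: d['b'] = 5
relax("s", "a", w, d)   # first edge of p: d['a'] = 1
relax("s", "b", w, d)   # intermixed again: no change
relax("a", "b", w, d)   # second edge of p: d['b'] = 2 = delta(s, b)
print(d["b"])  # 2
```

Only the relative order of p's own edges matters; the extra relaxations can never push an estimate below its true shortest-path weight.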
Correctness of the Bellman-Ford algorithm
1. On graphs without negative-weight cycles:
Lemma 24.2 Let G = (V, E) be a weighted, directed graph with source s and weight
function w : E → R, and assume that G contains no negative-weight cycles reachable
from s. Then after the |V| − 1 iterations of line 5 of the algorithm,
v.d = δ(s, v) for all vertices v that are reachable from s.
Proof: (By induction on k, the number of edges on a shortest path p: s ↝ v.)
Base: k = 0. Then v = s and s.d = 0 = δ(s, s).
Inductive hypothesis: the claim holds for every vertex whose shortest path from s has k − 1 edges.
Inductive step: suppose a shortest path to v has k edges. By Lemma 24.1,
δ(s, v) = δ(s, y) + w(y, v) for some vertex y whose shortest path s ↝ y has k − 1 edges.
By the hypothesis, y.d = δ(s, y) after k − 1 iterations of line 5.
The kth iteration relaxes every edge, including (y, v), so afterwards
v.d ≤ y.d + w(y, v) = δ(s, y) + w(y, v) = δ(s, v).
Since v.d ≥ δ(s, v) always holds, v.d = δ(s, v). Because every shortest path is simple,
it has at most |V| − 1 edges, so the |V| − 1 iterations settle every reachable v.
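In the complementary case, a negative-weight cycle reachable from s makes the final check of lines 10–12 fire: some edge can still be improved after |V| − 1 rounds. A compact sketch (graph is an illustrative assumption):

```python
import math

def bellman_ford_detects_cycle(vertices, edges, w, s):
    """True iff a negative-weight cycle is reachable from s."""
    d = {v: math.inf for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):       # |V| - 1 rounds of relaxation
        for (u, v) in edges:
            if d[v] > d[u] + w[(u, v)]:
                d[v] = d[u] + w[(u, v)]
    # a further improvement is possible iff some reachable cycle is negative
    return any(d[v] > d[u] + w[(u, v)] for (u, v) in edges)

V = ["s", "a", "b"]
E = [("s", "a"), ("a", "b"), ("b", "a")]
w = {("s", "a"): 1, ("a", "b"): -2, ("b", "a"): 1}  # cycle a->b->a weighs -1
print(bellman_ford_detects_cycle(V, E, w, "s"))  # True
```

On such a graph the d values have no well-defined limit, which is exactly why the algorithm returns FALSE instead of distances.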
Chapter 24. Single Source Shortest Paths
Correctness of Bellman-Ford algorithm
1. On graphs without negative cycles)
Lemma 24.2 Let G = (V,E) be a weighted, directed graph with source s and weight
function w : E ! R and assume that G contains no negative weight cycles that can
be reached from s. Then after |V |� 1 iterations of line 5 in the algorithm,
v.d = �(s, v) for all vertices v that are reachable from s.
Proof: (By induction on k, the number of edges on the computed path p: s
p v, to
prove the claim to be true).
Base: k = 0. v = s. It is true.
Assume: the claim is true for k � 1.
Induction: computed path p: s
p v has k edges and
p arrives at x before reaching v via (x, v). So v.d = x.d+ w(x, v)
By Lemma 24.1, �(s, v) = �(s, y) + w(y, v) for some y.
Since after k iterations, v.d has been updated with the statement
if v.d > u.d+w(u, v) then v.d = u.d+w(u, v), for all u, including x, y
By the assumption, for every u, including x and y, u.d = �(s, u) because
the computed path s u contains k � 1 edges. So we have
v.d = x.d+ w(x, v) y.d+ w(y, v) = �(s, y) + w(y, v) = �(s, v)
Chapter 24. Single Source Shortest Paths
Correctness of Bellman-Ford algorithm
1. On graphs without negative cycles)
Lemma 24.2 Let G = (V,E) be a weighted, directed graph with source s and weight
function w : E ! R and assume that G contains no negative weight cycles that can
be reached from s. Then after |V |� 1 iterations of line 5 in the algorithm,
v.d = �(s, v) for all vertices v that are reachable from s.
Proof: (By induction on k, the number of edges on the computed path p: s
p v, to
prove the claim to be true).
Base: k = 0. v = s. It is true.
Assume: the claim is true for k � 1.
Induction: computed path p: s
p v has k edges and
p arrives at x before reaching v via (x, v). So v.d = x.d+ w(x, v)
By Lemma 24.1, �(s, v) = �(s, y) + w(y, v) for some y.
Since after k iterations, v.d has been updated with the statement
if v.d > u.d+w(u, v) then v.d = u.d+w(u, v), for all u, including x, y
By the assumption, for every u, including x and y, u.d = �(s, u) because
the computed path s u contains k � 1 edges. So we have
v.d = x.d+ w(x, v) y.d+ w(y, v) = �(s, y) + w(y, v) = �(s, v)
Chapter 24. Single Source Shortest Paths
Theorem 24.4 The Bellman-Ford algorithm is correct on weighted, directed
graphs.
Proof: By Lemma 24.2, we only need to show that when G contains a negative-weight
cycle reachable from s, the algorithm returns FALSE.
Let the cycle be c = (v_0, v_1, ..., v_k), where v_0 = v_k and
    ∑_{i=1}^{k} w(v_{i−1}, v_i) < 0.
Suppose, for contradiction, that for all i, v_i.d ≤ v_{i−1}.d + w(v_{i−1}, v_i).
Summing around the cycle,
    ∑_{i=1}^{k} v_i.d ≤ ∑_{i=1}^{k} ( v_{i−1}.d + w(v_{i−1}, v_i) ).
But ∑_{i=1}^{k} v_i.d = ∑_{i=1}^{k} v_{i−1}.d, since each vertex of c appears exactly
once in each sum, implying
    ∑_{i=1}^{k} w(v_{i−1}, v_i) ≥ 0,
contradicting c being a negative cycle with ∑_{i=1}^{k} w(v_{i−1}, v_i) < 0.
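The contradiction in Theorem 24.4 corresponds to the algorithm's final check: after |V| − 1 passes, one more sweep over the edges; if any edge can still be relaxed, a negative-weight cycle is reachable from s. A Python sketch, assuming an edge-list representation (the function name is illustrative):

```python
import math

def bellman_ford_check(vertices, edges, s):
    """Returns (True, d) if no negative cycle is reachable from s, else (False, d).
    edges: list of (u, v, w) triples."""
    d = {v: math.inf for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):    # the |V| - 1 relaxation passes
        for u, v, w in edges:
            if d[v] > d[u] + w:
                d[v] = d[u] + w
    # Final check from Theorem 24.4: if some edge still relaxes,
    # a negative-weight cycle is reachable from s.
    for u, v, w in edges:
        if d[v] > d[u] + w:
            return False, d
    return True, d
```

On the cycle a→b (1), b→a (−3) reachable from s, the check returns FALSE, since distances around the cycle can be decreased forever.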
Chapter 24. Single Source Shortest Paths
Finding shortest paths on DAGs (directed acyclic graphs)
• Algorithms can take advantage of the acyclicity.
• What would your algorithm be?
  Relax the edges in topological order of the vertices!
Chapter 24. Single Source Shortest Paths
Dag-Shortest-Paths(G, w, s)
1. topologically sort the vertices of G.V
2. for each vertex v ∈ G.V
3.     v.d = ∞
4.     v.π = NULL
5. s.d = 0
6. for each u ∈ G.V, in the topologically sorted order
7.     for each vertex v ∈ Adj[u]
8.         if v.d > u.d + w(u, v)
9.             v.d = u.d + w(u, v)
10.            v.π = u
11. return (d, π)
• Should we improve lines 6-7?
• Running time: ?
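A runnable Python sketch of Dag-Shortest-Paths, assuming an adjacency-list representation (a dict of neighbor lists; the helper names are illustrative). The topological sort of line 1 is done with a simple DFS:

```python
import math

def dag_shortest_paths(adj, s):
    """adj: dict mapping vertex -> list of (neighbor, weight) pairs."""
    # line 1: topological sort via DFS post-order
    order, seen = [], set()
    def visit(u):
        seen.add(u)
        for v, _ in adj.get(u, []):
            if v not in seen:
                visit(v)
        order.append(u)                   # post-order: u after its successors
    for u in adj:
        if u not in seen:
            visit(u)
    order.reverse()                       # reverse post-order = topological order
    # lines 2-5: initialization
    d = {u: math.inf for u in order}
    pi = {u: None for u in order}
    d[s] = 0
    # lines 6-10: relax each edge once, in topological order
    for u in order:
        for v, w in adj.get(u, []):
            if d[v] > d[u] + w:
                d[v] = d[u] + w
                pi[v] = u
    return d, pi
```

Because every edge is relaxed exactly once, this runs in Θ(V + E) with adjacency lists, which answers the running-time prompt above. Negative edge weights are fine here, since a DAG cannot contain a cycle at all.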
Chapter 24. Single Source Shortest Paths
[Figure omitted; note: the root is s.]
Chapter 24. Single Source Shortest Paths
Dijkstra’s algorithm
On weighted, directed graphs in which each edge has a non-negative weight.
Dijkstra(G, w, s)
1. for each vertex v ∈ G.V
2.     v.d = ∞
3.     v.π = NULL
4. s.d = 0
5. S = ∅
6. Q = G.V
7. while Q is not empty
8.     u = Extract-Min(Q)
9.     S = S ∪ {u}
10.    for each vertex v ∈ Adj[u]
11.        if v.d > u.d + w(u, v)
12.            v.d = u.d + w(u, v)
13.            v.π = u
14. return (d, π)
Running time: ?
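A runnable Python sketch of Dijkstra's algorithm using the standard-library heapq as the min-priority queue Q. Since heapq has no Decrease-Key operation, updated vertices are re-pushed and stale heap entries are skipped on extraction ("lazy deletion"); this is a common practical realization of lines 7-13, not the textbook's exact structure. It assumes every vertex appears as a key in adj.

```python
import heapq
import math

def dijkstra(adj, s):
    """adj: dict mapping vertex -> list of (neighbor, weight); weights >= 0."""
    d = {u: math.inf for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    heap = [(0, s)]                       # Q, as (distance, vertex) pairs
    done = set()                          # the set S of finalized vertices
    while heap:
        du, u = heapq.heappop(heap)       # Extract-Min
        if u in done:
            continue                      # stale entry from an earlier relaxation
        done.add(u)                       # S = S ∪ {u}
        for v, w in adj[u]:
            if d[v] > du + w:             # relax edge (u, v)
                d[v] = du + w
                pi[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, pi
```

With a binary heap this runs in O((V + E) log V); the Θ(V²) bound from a linear-scan Extract-Min is better only on dense graphs. Either answer fits the running-time prompt above, depending on how Q is implemented.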
Chapter 24. Single Source Shortest Paths
Note: the black-colored vertices are in set S.
Chapter 24. Single Source Shortest Paths
Correctness of algorithm Dijkstra
Theorem 24.6 Dijkstra's algorithm, run on a weighted, directed graph G = (V, E) with non-negative weight function w and source s, terminates with u.d = δ(s, u) for all vertices u ∈ V.
Proof: We show that the while loop maintains the loop invariant:
u.d = δ(s, u) for each u ∈ S.
Assume u is the first vertex with u.d > δ(s, u) at the moment it is added to S. Then there must be a shortest path p: s ⇝ x → y ⇝ u, for some x ∈ S and some y ∉ S.
We claim y.d = δ(s, y) when u is being added to S. This is because x ∈ S, so x.d = δ(s, x) when x was added to S; edge (x, y) was relaxed at that time, and therefore y.d = δ(s, y) by the convergence property.
When u was chosen, u.d ≤ y.d = δ(s, y) ≤ δ(s, u), which contradicts the choice of u. So u.d = δ(s, u) when it is included in S.
Chapter 24. Single Source Shortest Paths
• Running time of Dijkstra?
• Can Dijkstra deal with negative edges or negative cycles?
• Fundamental differences between Bellman-Ford and Dijkstra?
Chapter 24. Single Source Shortest Paths
• Fundamental differences between Dijkstra and MST-Prim?
Chapter 25. All-pairs shortest paths
All-Pairs Shortest Paths Problem
Input: A weighted graph G = (V, E) with edge weight function w;
Output: Shortest paths between every pair of vertices in G.
• Running Dijkstra from every source would take O(|V|² log |V| + |V||E|) time, and only on non-negative edges
• Running Bellman-Ford from every source would take O(|V|²|E|) time for general graphs, which is O(|V|⁴) on "dense" graphs
New algorithms:
• A dynamic programming algorithm: O(|V|⁴), improved to O(|V|³ log |V|)
• Floyd-Warshall algorithm: O(|V|³)
Graph representation: adjacency matrix W = (w_ij)
Chapter 25. All-pairs shortest paths
A dynamic programming approach
• Optimal substructure
• Objective function
Define l_ij to be the minimum weight of any path from v_i to v_j.
This does not work: the subproblems have a circular data dependency.
Define l^(m)_ij to be the minimum weight of any path from v_i to v_j that contains at most m edges,
or alternatively,
Define l^(k)_ij to be the minimum weight of any path from v_i to v_j in which all intermediate vertices have indices at most k.
Chapter 25. All-pairs shortest paths
Define l^(m)_ij to be the minimum weight of any path from v_i to v_j that contains at most m edges. Then

l^(m)_ij = min( l^(m-1)_ij, min_{1≤k≤n} { l^(m-1)_ik + w_kj } )

If w_jj = 0, we can rewrite this as

l^(m)_ij = min_{1≤k≤n} { l^(m-1)_ik + w_kj }

with base case:

l^(1)_ij = w_ij

Adjacency matrix W = (w_ij) is the default representation.
Chapter 25. All-pairs shortest paths
DP table-filling algorithm:
For L^(1) = W and m = 2, . . . , n−1, compute table L^(m) from table L^(m−1); technically two tables are enough.
Extended-Shortest-Paths(L, W)
1. n = rows[L];
2. let L′ be an n × n table;
3. for i = 1 to n
4. for j = 1 to n
5. L′[i, j] = ∞ (L′[i, j] = L[i, j] in case w_aa ≠ 0)
6. for k = 1 to n
7. L′[i, j] = min{L′[i, j], L[i, k] + w[k, j]}
8. return (L′)
Call Extended-Shortest-Paths for m = 2, 3, . . . , n−1:
L^(m) = Extended-Shortest-Paths(L^(m−1), W)
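A direct transcription in Python (0-indexed; function names are my own). Note that line 5's alternative initialization is unnecessary when w_aa = 0, because the term k = j then contributes L[i, j] + w_jj = L[i, j] to the min, so the old value is never lost.

```python
INF = float("inf")

def extended_shortest_paths(L, W):
    """One relaxation pass: L'[i][j] = min over k of L[i][k] + W[k][j]."""
    n = len(L)
    Lp = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if L[i][k] + W[k][j] < Lp[i][j]:
                    Lp[i][j] = L[i][k] + W[k][j]
    return Lp

def slow_apsp(W):
    """Theta(n^4) all-pairs shortest paths: n-2 extension passes from L^(1) = W."""
    n = len(W)
    L = W
    for _ in range(2, n):              # m = 2, ..., n-1
        L = extended_shortest_paths(L, W)
    return L
```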
Chapter 25. All-pairs shortest paths
Running on an example:
W = L^(1) = the first matrix.

l^(2)_{0,0} = min of:
  l^(1)_{0,0}                          (value = 8)
  l^(1)_{0,0} + l^(1)_{0,0}    (k = 0, value = 8 + 8 = 16)
  l^(1)_{0,1} + l^(1)_{1,0}    (k = 1, value = 1 + 6 = 7)
  l^(1)_{0,2} + l^(1)_{2,0}    (k = 2, value = 1 + 3 = 4)

so l^(2)_{0,0} = 4.
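This case analysis can be checked mechanically. The sketch below hard-codes only the five L^(1) entries quoted on this slide (0-indexed, as above; the full matrix is in the figure and not assumed here):

```python
# The five L^(1) entries quoted on this slide (taken from the figure).
l1 = {(0, 0): 8, (0, 1): 1, (0, 2): 1, (1, 0): 6, (2, 0): 3}

# l^(2)_{0,0}: keep the old value, or route through intermediate k = 0, 1, 2.
candidates = [l1[(0, 0)]] + [l1[(0, k)] + l1[(k, 0)] for k in range(3)]
l2_00 = min(candidates)
```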
Chapter 25. All-pairs shortest paths
• Running time: Θ(n⁴).
• Improving the running time by repeated squaring:
compute L^(1), L^(2), L^(4), . . . , L^(2^k).
What is k here? Choose the smallest k with 2^k ≥ n − 1, i.e. k = ⌈log₂(n − 1)⌉.
Faster-All-Pairs-Shortest-Paths(W)
1. n = rows[W];
2. L = W;
3. m = 1;
4. while m < n − 1
5. L = Extended-Shortest-Paths(L, L)
6. m = 2 × m
7. return (L)
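A Python sketch of the repeated-squaring variant (function names are mine). Squaring L, i.e. extending L with itself, doubles the edge budget m, so ⌈log₂(n−1)⌉ passes suffice; overshooting past n − 1 is harmless because shortest paths are simple and use at most n − 1 edges.

```python
INF = float("inf")

def extend(L, A):
    """One min-plus 'product': result[i][j] = min over k of L[i][k] + A[k][j]."""
    n = len(L)
    return [[min(L[i][k] + A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faster_apsp(W):
    """Theta(n^3 log n) all-pairs shortest paths by repeated squaring."""
    n = len(W)
    L = W                      # L^(1)
    m = 1
    while m < n - 1:
        L = extend(L, L)       # L^(2m) from L^(m)
        m *= 2
    return L
```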
Chapter 25. All-pairs shortest paths
Floyd-Warshall algorithm
intermediate vertices on a path v_i ⇝ v_j: those other than v_i and v_j.
Define d^(k)_ij to be the shortest-path distance from v_i to v_j
with no intermediate vertex of index higher than k. Thus
d^(k)_ij = min(d^(k−1)_ij, d^(k−1)_ik + d^(k−1)_kj)
with base case d^(0)_ij = w_ij.
Floyd-Warshall(W)
1. n = rows[W]
2. D^(0) = W
3. for k = 1 to n
4.   for i = 1 to n
5.     for j = 1 to n
6.       D^(k)[i, j] = min{D^(k−1)[i, j], D^(k−1)[i, k] + D^(k−1)[k, j]}
7. return D^(n)
• running time: Θ(n³).
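A direct Python sketch of the pseudocode above. It keeps a single matrix D rather than n + 1 copies; this in-place variant is safe because row k and column k do not change during iteration k (d^(k)_ik = d^(k−1)_ik and d^(k)_kj = d^(k−1)_kj).

```python
# Floyd-Warshall, assuming no negative-weight cycles (negative edges are OK).
INF = float("inf")

def floyd_warshall(W):
    """Return the matrix of shortest-path distances for weight matrix W."""
    n = len(W)
    D = [row[:] for row in W]          # D^(0) = W
    for k in range(n):                 # allow v_k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```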
Chapter 25. All-pairs shortest paths
• Constructing a shortest path
• for each v_i and each v_j , remember the last step on a shortest path from v_i to v_j :
predecessor matrix Π, recursively defined as
π^(0)_ij = NULL if i = j or w_ij = ∞, or
π^(0)_ij = i if i ≠ j and w_ij < ∞;
π^(k)_ij = π^(k−1)_ij if d^(k−1)_ij ≤ d^(k−1)_ik + d^(k−1)_kj , or
π^(k)_ij = π^(k−1)_kj if d^(k−1)_ij > d^(k−1)_ik + d^(k−1)_kj
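The predecessor recurrence can be carried along inside the Floyd-Warshall loop: whenever routing through v_k improves d_ij, the predecessor of v_j on the i-to-j path becomes its predecessor on the k-to-j path. A hedged Python sketch (function and variable names are this sketch's own, not the text's):

```python
# Floyd-Warshall with a predecessor matrix, assuming no negative cycles.
INF = float("inf")

def floyd_warshall_with_predecessors(W):
    """Return (D, P) where P[i][j] is the predecessor of v_j on a
    shortest path from v_i to v_j (None plays the role of NULL)."""
    n = len(W)
    D = [row[:] for row in W]
    # pi^(0): NULL if i == j or no edge, else i
    P = [[None if i == j or W[i][j] == INF else i for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]  # path to v_j now arrives via v_k's path
    return D, P

def reconstruct_path(P, i, j):
    """Walk predecessors backwards from v_j to v_i."""
    if i != j and P[i][j] is None:
        return None                    # no path exists
    path = [j]
    while j != i:
        j = P[i][j]
        path.append(j)
    return path[::-1]
```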
Chapter 25. All-pairs shortest paths
Summary of shortest-path algorithms
1. Bellman-Ford algorithm (detects negative-weight cycles)
2. DAG shortest paths (uses topological sorting) [Lawler]
3. Dijkstra's algorithm (assumes non-negative weights)
4. Matrix multiplication (DP) [Lawler, folklore]
5. Floyd-Warshall algorithm (DP)