CS61B Lecture #34
• Today: A* search, Minimum spanning trees, union-find.
Point-to-Point Shortest Path
• Dijkstra’s algorithm gives you shortest paths from a particular given vertex to all others in a graph.
• But suppose you’re only interested in getting to a particular vertex?
• Because the algorithm finds paths in order of length, you could simply run it and stop when you get to the vertex you want.
• But, this can be really wasteful.
• For example, to travel by road from Denver to a destination on lower Fifth Avenue in New York City is about 1750 miles (says Google).
• But traveling from Denver to the Gourmet Ghetto in Berkeley is about 1650 miles.
• So, we’d explore much of California, Nevada, Arizona, etc. before we found our destination, even though these are all in the wrong direction!
• Situation even worse when graph is infinite, generated on the fly.
• We’re looking for a path from vertex Denver to the desired NYC vertex.
• Suppose that we had a heuristic guess, h(V ), of the length of a path from any vertex V to NYC.
• And suppose that instead of visiting vertices in the fringe in order of their shortest known path to Denver, we order by the sum of that distance plus a heuristic estimate of the remaining distance to NYC: d(Denver, V ) + h(V ).
• In other words, we look at places that are reachable from places where we already know the shortest path to Denver and choose those that look like they will result in the shortest trip to NYC, guessing at the remaining distance.
• If the estimate is good, then we don’t look at, say, Grand Junction (250 miles west by road), because it’s in the wrong direction.
• The resulting algorithm is A* search.
• But for it to work, we must be careful about the heuristic.
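• As a concrete illustration, here is a minimal Java sketch of A* search (not the lecture’s graph-demo code; the adjacency-map graph, the array h of heuristic estimates, and all names are assumptions made for the example). The only changes from Dijkstra’s algorithm are the fringe ordering and the early exit at the goal.

    import java.util.*;

    /** A minimal sketch of A* search over a hypothetical adjacency-map graph.
     *  adj.get(u) maps each neighbor v of u to the weight of edge (u, v), and
     *  h[v] is a consistent heuristic estimate of the distance from v to GOAL. */
    class AStarSketch {
        /** Return a shortest path from START to GOAL as a list of vertex numbers. */
        static List<Integer> shortestPath(List<Map<Integer, Double>> adj,
                                          int start, int goal, double[] h) {
            int n = adj.size();
            double[] dist = new double[n];           // best known distance from start
            int[] parent = new int[n];
            Arrays.fill(dist, Double.POSITIVE_INFINITY);
            Arrays.fill(parent, -1);
            dist[start] = 0.0;

            // Fringe entries are {priority, vertex}, ordered by
            // priority = d(start, v) + h(v) -- not by d(start, v) alone.
            PriorityQueue<double[]> fringe =
                new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[0]));
            fringe.add(new double[] { h[start], start });
            boolean[] done = new boolean[n];

            while (!fringe.isEmpty()) {
                int v = (int) fringe.poll()[1];
                if (done[v]) {
                    continue;                        // stale entry left by a lazy update
                }
                if (v == goal) {
                    break;                           // stop when we remove the target
                }
                done[v] = true;
                for (Map.Entry<Integer, Double> entry : adj.get(v).entrySet()) {
                    int w = entry.getKey();
                    double d = dist[v] + entry.getValue();
                    if (d < dist[w]) {               // found a shorter route to w
                        dist[w] = d;
                        parent[w] = v;
                        fringe.add(new double[] { d + h[w], w });
                    }
                }
            }

            LinkedList<Integer> path = new LinkedList<>();
            for (int v = goal; v != -1; v = parent[v]) {
                path.addFirst(v);                    // follow parent pointers back to start
            }
            return path;
        }
    }

• With h identically zero, this sketch is just a (lazy-deletion) version of Dijkstra’s algorithm stopped at the goal.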
Illustration
[Figure: partial road map around Denver, with determined distances (solid lines) to Grand Junction (243) and North Platte (256), plus heuristic estimates of the remaining distance to NYC.]
• If the solid lines indicate distances from Denver we have determined so far,
• Dijkstra’s algorithm would have us look at Grand Junction next (smallest distance from Denver).
• But if we add in the heuristic remaining distance to NYC (our goal), we choose North Platte instead.
Admissible Heuristics for A* Search
• If our heuristic estimate for the distance to NYC is too high (i.e., larger than the actual distance by road), then we may get to NYC without ever examining points along the shortest route.
• For example, if our heuristic decided that the midwest was literally the middle of nowhere, and h(C) = 2000 for C any city in Michigan or Indiana, we’d only find a path that detoured south through Kentucky.
• So to be admissible, h(C) must never overestimate d(C, NYC), the minimum path distance from C to NYC.
• On the other hand, h(C) = 0 will work (what is the result?), but yields a non-optimal algorithm.
Consistency
• Suppose that we estimate h(Chicago) = 700, and h(Springfield, IL) = 200, where d(Chicago, Springfield) = 200.
• So by driving 200 miles to Springfield, we guess that we are suddenly 500 miles closer to NYC.
• This is admissible, since both estimates are low, but it will mess up our algorithm.
• Specifically, it will require that we put processed nodes back into the fringe, in case our estimate was wrong.
• So (in this course, anyway) we also require consistent heuristics: h(A) ≤ h(B) + d(A, B), as for the triangle inequality.
• All consistent heuristics are admissible (why?).
• For project 3, distance “as the crow flies” is a good h(·) in the trip application (see the sketch following this list).
• Demo of A* search (and others) is in cs61b-software and on the instructional machines as graph-demo.
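• For instance, assuming each vertex carries planar coordinates (an assumption for this sketch; the trip application would use whatever coordinates its map data provides), a straight-line heuristic might look like:

    /** Hypothetical straight-line heuristic: the estimated remaining distance from
     *  a vertex at (vx, vy) to the goal at (gx, gy).  A straight line never exceeds
     *  the road distance, so this h is admissible; it also satisfies the triangle
     *  inequality, so it is consistent. */
    static double h(double vx, double vy, double gx, double gy) {
        double dx = vx - gx, dy = vy - gy;
        return Math.sqrt(dx * dx + dy * dy);
    }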
Summary of Shortest Paths
• Dijkstra’s algorithm finds a shortest-path tree giving (backwards) shortest paths in a weighted graph from a given starting node to all other nodes.
• Time required =
– Time to remove V nodes from the priority queue +
– Time to update all neighbors of each of these nodes and add or reorder them in the queue (E lg E)
– ∈ Θ(V lg V + E lg V) = Θ((V + E) lg V)
• A* search searches for a shortest path to a particular target node.
• Same as Dijkstra’s algorithm, except:
– Stop when we take target from queue.
– Order queue by estimated distance to start + heuristic guess of remaining distance (h(v) ≈ d(v, target)).
– Heuristic must not overestimate distance and must obey the triangle inequality (d(a, b) + d(b, c) ≥ d(a, c)).
Minimum Spanning Trees
• Problem: Given a set of places and distances between them (assume always positive), find a set of connecting roads of minimum total length that allows travel between any two.
• The routes you get will not necessarily be shortest paths.
• Easy to see that such a set of connecting roads and places must form a tree, because removing one road in a cycle still allows all to be reached.
Minimum Spanning Trees by Prim’s Algorithm
• Idea is to grow a tree starting from an arbitrary node.
• At each step, add the shortest edge connecting some node already in the tree to one that isn’t yet.
• Why must this work?
PriorityQueue fringe;
For each node v { v.dist() = ∞; v.parent() = null; }
Choose an arbitrary starting node, s;
s.dist() = 0;
fringe = priority queue ordered by smallest .dist();
add all vertices to fringe;
while (!fringe.isEmpty()) {
    Vertex v = fringe.removeFirst();
    For each edge (v, w) {
        if (w ∈ fringe && weight(v, w) < w.dist())
            { w.dist() = weight(v, w); w.parent() = v; }
    }
}
[Figure: example weighted graph; each node is annotated with its current dist value (initially ∞), updated step by step as vertices leave the fringe and join the tree.]
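• Here is a runnable Java rendering of the pseudocode above, as a sketch: the adjacency-map representation and names are assumptions for the example, the graph is assumed connected, and because java.util.PriorityQueue has no “reorder” (decrease-key) operation, improved entries are simply re-inserted and stale ones skipped.

    import java.util.*;

    /** A sketch of Prim's algorithm over a hypothetical adjacency-map graph:
     *  adj.get(u) maps each neighbor v of u to the weight of edge (u, v). */
    class PrimSketch {
        /** Return parent[], where the MST edges are (v, parent[v]) for v != s. */
        static int[] mst(List<Map<Integer, Double>> adj, int s) {
            int n = adj.size();
            double[] dist = new double[n];       // cheapest known edge linking v to the tree
            int[] parent = new int[n];
            boolean[] inTree = new boolean[n];
            Arrays.fill(dist, Double.POSITIVE_INFINITY);
            Arrays.fill(parent, -1);
            dist[s] = 0.0;

            // Entries are {dist, vertex}; re-insert instead of reordering in place.
            PriorityQueue<double[]> fringe =
                new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[0]));
            fringe.add(new double[] { 0.0, s });

            while (!fringe.isEmpty()) {
                int v = (int) fringe.poll()[1];
                if (inTree[v]) {
                    continue;                    // stale entry; v already joined the tree
                }
                inTree[v] = true;                // v and edge (v, parent[v]) join the MST
                for (Map.Entry<Integer, Double> entry : adj.get(v).entrySet()) {
                    int w = entry.getKey();
                    double weight = entry.getValue();
                    if (!inTree[w] && weight < dist[w]) {
                        dist[w] = weight;        // found a cheaper edge connecting w to the tree
                        parent[w] = v;
                        fringe.add(new double[] { weight, w });
                    }
                }
            }
            return parent;
        }
    }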
Minimum Spanning Trees by Kruskal’s Algorithm
• Observation: the shortest edge in a graph can always be part of a minimum spanning tree.
• In fact, if we have a bunch of subtrees of a MST, then the shortest edge that connects two of them can be part of a MST, combining the two subtrees into a bigger one.
Create one (trivial) subtree for each node in the graph;
for each edge (v, w), in increasing order of weight {
    if ( (v, w) connects two different subtrees ) {
        Add (v, w) to MST;
        Combine the two subtrees into one;
    }
}
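• A self-contained Java sketch of this algorithm (the Edge class and names are assumptions for the example). For simplicity it tracks subtrees with the “obvious” set-number scheme described on the next slide; the tree-based union-find that follows is the faster choice.

    import java.util.*;

    /** A sketch of Kruskal's algorithm on an edge list for vertices 0 .. n-1. */
    class KruskalSketch {
        static class Edge {
            final int v, w;
            final double weight;
            Edge(int v, int w, double weight) { this.v = v; this.w = w; this.weight = weight; }
        }

        /** Return the edges of a minimum spanning tree (a forest, if disconnected). */
        static List<Edge> mst(int n, List<Edge> edges) {
            int[] setId = new int[n];                 // setId[v]: which subtree v belongs to
            for (int v = 0; v < n; v += 1) {
                setId[v] = v;                         // one trivial subtree per node
            }

            edges.sort((a, b) -> Double.compare(a.weight, b.weight));  // sorts in place
            List<Edge> result = new ArrayList<>();
            for (Edge e : edges) {                    // in increasing order of weight
                if (setId[e.v] != setId[e.w]) {       // connects two different subtrees?
                    result.add(e);
                    int from = setId[e.w], to = setId[e.v];
                    for (int u = 0; u < n; u += 1) {
                        if (setId[u] == from) {       // combine the two subtrees into one
                            setId[u] = to;
                        }
                    }
                }
            }
            return result;
        }
    }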
Union Find
• Kruskal’s algorithm required that we have a set of sets of nodes with two operations:
– Find which of the sets a given node belongs to.
– Replace two sets with their union, reassigning all the nodes in the two original sets to this union.
• Obvious thing to do is to store a set number in each node, making finds fast.
• Union requires changing the set number in one of the two sets being merged; the smaller set is the better choice.
• This means an individual union can take Θ(N) time.
• Can union be fast?
A Clever Trick
• Let’s choose to represent a set of nodes by one arbitrary representative node in that set.
• Let every node contain a pointer to another node in the same set.
• Arrange for each pointer to represent the parent of a node in a tree that has the representative node as its root.
• To find what set a node is in, follow parent pointers.
• To union two such trees, make one root point to the other (choose the root of the larger tree as the union representative).
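• As a sketch of this representation (array-based, with names chosen for the example): parent[v] points to another node in v’s set, a root represents its set, and union attaches the smaller tree’s root under the larger one’s.

    /** A sketch of union-find with parent pointers and union by size. */
    class UnionFindSketch {
        private final int[] parent;   // parent[v] == v  <=>  v is a root (representative)
        private final int[] size;     // size[r]: number of nodes in the tree rooted at r

        UnionFindSketch(int n) {
            parent = new int[n];
            size = new int[n];
            for (int v = 0; v < n; v += 1) {
                parent[v] = v;        // every node starts as its own one-element set
                size[v] = 1;
            }
        }

        /** Return the representative of the set containing V by following parent pointers. */
        int find(int v) {
            while (parent[v] != v) {
                v = parent[v];
            }
            return v;
        }

        /** Merge the sets containing V and W; the larger tree's root represents the union. */
        void union(int v, int w) {
            int rv = find(v), rw = find(w);
            if (rv == rw) {
                return;                          // already in the same set
            }
            if (size[rv] < size[rw]) {
                int tmp = rv; rv = rw; rw = tmp; // make rv the root of the larger tree
            }
            parent[rw] = rv;                     // smaller root now points at larger root
            size[rv] += size[rw];
        }
    }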
Path Compression
• This makes unioning really fast, but the find operation potentially slow (Ω(lg N)).
• So use the following trick: whenever we do a find operation, compress the path to the root, so that subsequent finds will be faster.
• That is, make each of the nodes in the path point directly to the root.
• Now union is very fast, and each operation in a sequence of unions and finds takes very nearly constant amortized time.
• Example: find ‘g’ in the last tree (result of compression shown on the right): [Figure: the tree before and after path compression; after the find, every node on the path from g to the root points directly at the root.]
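• As a sketch, path compression amounts to replacing the find method in the UnionFindSketch above with a version that, after locating the root, points every node on the search path directly at it:

    /** find with path compression: later finds of any node on this path become O(1). */
    int find(int v) {
        if (parent[v] == v) {
            return v;                   // v is the representative
        }
        int root = find(parent[v]);     // locate the root recursively ...
        parent[v] = root;               // ... then point v straight at it
        return root;
    }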