

CSE 3521: Graph Search
[Many slides are adapted from CS188 Intro to AI at UC Berkeley and previous offerings of CSE 3521 at OSU.]

Recap: Uninformed vs. informed search
Uninformed search
Given no information about the problem (other than its definition)
Find solutions by systematically visiting new states and testing for goal
Algorithms: DFS, BFS, UCS

Informed search
Given some ideas of where to look for solutions
Use problem-specific knowledge: heuristics
Algorithms: Greedy, A*

Tree search: Extra work!
Failure to detect repeated states can cause exponentially more work.
[Figure: a state graph and the corresponding search tree, with repeated states]

Graph search
In BFS, for example, we shouldn’t bother expanding the circled nodes (why?)
[Figure: the example state graph (states S, a, b, c, d, e, f, h, p, q, r, and goal G) and its BFS search tree, with nodes for already-expanded states circled]

Graph search
Idea: never expand a state twice
How to implement:
Tree search + set of expanded states (“closed set”)
Expand the search tree node-by-node, but…
Before expanding a node, check to make sure its state has never been expanded before
If not new, skip it; if new, add it to closed set
Important: store the closed set as a set, not a list
Can graph search wreck completeness? Why/why not?
How about optimality?
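
A minimal Python sketch of this idea for BFS. The problem interface used here (get_start_state, is_goal_state, get_successors) is a hypothetical naming chosen for illustration, not something defined on the slides:

```python
from collections import deque

def breadth_first_graph_search(problem):
    """BFS graph search: tree search plus a closed set of expanded states."""
    fringe = deque([(problem.get_start_state(), [])])  # (state, path of actions so far)
    closed = set()                                     # store expanded states in a set, not a list

    while fringe:
        state, path = fringe.popleft()
        if problem.is_goal_state(state):
            return path
        if state in closed:                            # state already expanded: skip it
            continue
        closed.add(state)                              # new state: add it to the closed set
        for successor, action, _cost in problem.get_successors(state):
            fringe.append((successor, path + [action]))
    return None                                        # no solution found
```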

A* Graph Search goes wrong?
State space graph: S→A (cost 1), S→B (cost 1), A→C (cost 1), B→C (cost 2), C→G (cost 3); heuristics h(S)=2, h(A)=4, h(B)=1, h(C)=1, h(G)=0
Search tree (g+h): S (0+2) has children A (1+4) and B (1+1); B leads to C (3+1) and then G (6+0); A leads to C (2+1) and then G (5+0)
The node (S-B-C), with f = 3+1 = 4, is expanded before the node (S-A-C), with f = 2+1 = 3, is even generated; once C enters the closed set, (S-A-C) can never be expanded in graph search, so A* returns the suboptimal path S-B-C-G (cost 6) instead of S-A-C-G (cost 5)!

Consistency of heuristics for A*
Main idea: estimated heuristic costs ≤ actual costs
Admissibility: heuristic cost ≤ actual cost to goal
h(A) ≤ actual cost from A to G
Consistency: heuristic “arc” cost ≤ actual cost for each arc
h(A) – h(C) ≤ cost(A to C), i.e., h(A) ≤ cost(A to C) + h(C)
h(G) = 0 in both cases, where G is a goal state
Consequences of consistency:
The f value along a path never decreases: f(C) = g(C) + h(C) = g(A) + cost(A to C) + h(C) ≥ g(A) + h(A) = f(A)
Whenever A* expands a state, it has already found an optimal path to this state
A* graph search is optimal

[Figure: example with edges A→C (cost 1) and C→G (cost 3); h(C) = 1; h(A) = 4 is admissible but not consistent (h(A) − h(C) = 3 > 1), while h(A) = 2 is consistent]

Consistency Violation

[Figure: nodes A and C with arc cost c(A,C) and heuristic values h(A), h(C), illustrating the triangle inequality h(A) ≤ c(A,C) + h(C)]
If h says that A is 100 units from the goal, then moving from A along an arc costing 10 units should not lead to a node C that h estimates to be less than 90 units away from the goal
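
A small sketch that checks this condition over an explicit graph. The dictionary encoding below is illustrative, not from the slides; the example values are the ones from the consistency slide above:

```python
def is_consistent(h, edges, goals):
    """Check h(A) <= cost(A, C) + h(C) for every arc A -> C, and h(G) == 0 for goals.

    h:     dict mapping state -> heuristic value
    edges: dict mapping state -> list of (successor, arc cost) pairs
    goals: set of goal states
    """
    if any(h[g] != 0 for g in goals):
        return False
    for a, successors in edges.items():
        for c, cost in successors:
            if h[a] > cost + h[c]:      # triangle inequality violated on arc a -> c
                return False
    return True

# Example from the earlier consistency slide: A -> C (cost 1), C -> G (cost 3)
edges = {"A": [("C", 1)], "C": [("G", 3)], "G": []}
print(is_consistent({"A": 4, "C": 1, "G": 0}, edges, {"G"}))  # False: 4 > 1 + 1
print(is_consistent({"A": 2, "C": 1, "G": 0}, edges, {"G"}))  # True
```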

8 Puzzle I
Heuristic: Number of tiles misplaced
Consistent heuristic

[Figure: start state S and goal state of the 8-puzzle]
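
A minimal sketch of this heuristic, representing a state as a 3×3 tuple of tuples with 0 for the blank. The example start/goal boards are assumed (the slide's board images were not extracted); they are chosen to match h(S) = 18 on the next slide:

```python
def misplaced_tiles(state, goal):
    """Count tiles (excluding the blank, 0) that are not in their goal position."""
    return sum(
        1
        for r in range(3)
        for c in range(3)
        if state[r][c] != 0 and state[r][c] != goal[r][c]
    )

# Assumed example states (not extracted from the slide images):
goal = ((0, 1, 2), (3, 4, 5), (6, 7, 8))      # 0 is the blank
start = ((7, 2, 4), (5, 0, 6), (8, 3, 1))
print(misplaced_tiles(start, goal))           # 8: every tile is out of place here
```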

8 Puzzle II
Total Manhattan distance

Consistent heuristic

h(S) = 3 + 1 + 2 + … = 18

[Figure: start state S and goal state of the 8-puzzle]
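
A matching sketch for the Manhattan-distance heuristic, using the same assumed board representation and example states as above:

```python
def manhattan_distance(state, goal):
    """Sum over tiles of |row difference| + |column difference| to the goal position."""
    # Map each tile value to its goal coordinates.
    goal_pos = {goal[r][c]: (r, c) for r in range(3) for c in range(3)}
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:                     # the blank does not count
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

# Assumed example states (not extracted from the slide images):
goal = ((0, 1, 2), (3, 4, 5), (6, 7, 8))      # 0 is the blank
start = ((7, 2, 4), (5, 0, 6), (8, 3, 1))
print(manhattan_distance(start, goal))        # 18, matching h(S) = 3 + 1 + 2 + ... = 18
```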

A Heuristic for robot navigation
Both heuristics are consistent

Euclidean distance: h(n) = sqrt((xn − xg)² + (yn − yg)²)
Manhattan distance: h(n) = |xn − xg| + |yn − yg|
[Figure: current node n at coordinates (xn, yn) and goal at (xg, yg)]
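
Both heuristics can be written in a few lines for (x, y) coordinates; the function names below are illustrative:

```python
import math

def euclidean_heuristic(node, goal):
    """Straight-line distance from (xn, yn) to the goal (xg, yg)."""
    (xn, yn), (xg, yg) = node, goal
    return math.sqrt((xn - xg) ** 2 + (yn - yg) ** 2)

def manhattan_heuristic(node, goal):
    """Sum of horizontal and vertical offsets to the goal."""
    (xn, yn), (xg, yg) = node, goal
    return abs(xn - xg) + abs(yn - yg)

print(euclidean_heuristic((0, 0), (3, 4)))  # 5.0
print(manhattan_heuristic((0, 0), (3, 4)))  # 7
```

On a 4-connected grid with unit-cost moves, a single move changes either quantity by at most 1, so h(n) − h(n') ≤ cost(n, n') and both heuristics are consistent in that setting.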

Optimality
Tree search:
A* is optimal if heuristic is admissible
UCS is a special case (h = 0)

Graph search:
A* optimal if heuristic is consistent
UCS is optimal (h = 0 is consistent)

Consistency implies admissibility
A consistent heuristic is always admissible: chaining h(A) ≤ cost(A to C) + h(C) along any optimal path to a goal (where h = 0) gives h(A) ≤ actual cost from A to the goal

In general, most natural admissible heuristics tend to be consistent, especially if they are created from relaxed problems

A*: Summary
A* uses both backward costs and (estimates of) forward costs

A* is optimal with admissible / consistent heuristics

Heuristic design is key: often use relaxed problems

Tree search pseudo-code
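
The pseudo-code on this slide is an image that did not survive extraction; below is a minimal Python sketch of generic tree search under the same hypothetical problem interface used earlier (no closed set, so repeated states may be re-expanded):

```python
def tree_search(problem, fringe):
    """Generic tree search: no closed set, so repeated states may be re-expanded.

    `fringe` is any object with append() and pop(); a plain Python list makes this
    depth-first search, while other fringe types give other strategies.
    """
    fringe.append((problem.get_start_state(), []))      # (state, path of actions so far)
    while fringe:
        state, path = fringe.pop()
        if problem.is_goal_state(state):
            return path
        for successor, action, _cost in problem.get_successors(state):
            fringe.append((successor, path + [action]))
    return None                                         # no solution found
```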

Graph search pseudo-code

Graph search pseudo-code

[Example graph over states A, B, C]
Step 1 — Fringe: ['A'], Closed: []
Step 2 — Pop 'A', check 'A' against the closed set → Fringe: [], Closed: ['A']
Step 3 — Add A's successors with no checking → Fringe: ['B', 'B'], Closed: ['A']
Step 4 — Pop 'B', check 'B' against the closed set → Fringe: ['B'], Closed: ['A', 'B']
Step 5 — Add B's successors with no checking → Fringe: ['B', 'C'], Closed: ['A', 'B']
Check the closed set when you expand a node, not when you add nodes into the fringe
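
A Python sketch of A* graph search that follows this rule — the closed-set check happens when a node is popped for expansion, not when successors are added. The problem and heuristic interfaces are the same hypothetical ones used in the earlier sketches:

```python
import heapq
from itertools import count

def a_star_graph_search(problem, heuristic):
    """A* graph search: the closed-set check happens at expansion (pop) time."""
    start = problem.get_start_state()
    tie = count()                                  # tie-breaker so heapq never compares states
    fringe = [(heuristic(start), next(tie), 0, start, [])]   # (f, tie, g, state, path)
    closed = set()

    while fringe:
        _f, _, g, state, path = heapq.heappop(fringe)
        if state in closed:                        # skip states that were already expanded
            continue
        closed.add(state)                          # mark expanded only when popped
        if problem.is_goal_state(state):
            return path
        for successor, action, step_cost in problem.get_successors(state):
            new_g = g + step_cost
            f = new_g + heuristic(successor)
            heapq.heappush(fringe, (f, next(tie), new_g, successor, path + [action]))
    return None
```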

Graph search pseudo-code (wrong line order)

[Example graph over states A, B, C, as before]
Step 1 — Fringe: ['A'], Closed: []
Step 2 — Pop 'A' with no checking → Fringe: [], Closed: ['A']
Step 3 — Add A's successors, checking the closed set → Fringe: ['B', 'B'], Closed: ['A']
Step 4 — Pop 'B' with no checking → Fringe: ['B'], Closed: ['A', 'B']
Step 5 — Add B's successors, checking the closed set → Fringe: ['B', 'C'], Closed: ['A', 'B']

Advanced: Optimality of A* Graph Search

Advanced: Optimality of A* Graph Search
Sketch: consider what A* does with a consistent heuristic:

Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours)

Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s sub-optimally

Result: A* graph search is optimal

[Figure: f-contours f ≤ 1, f ≤ 2, f ≤ 3]

Advanced: Optimality of A* Graph Search
Consider what A* does:
Expands nodes in increasing total f value (f-contours)
Reminder: f(n) = g(n) + h(n) = cost to n + heuristic
Proof idea: the optimal goal(s) have the lowest f value, so they must be expanded first

[Figure: f-contours f ≤ 1, f ≤ 2, f ≤ 3]
There’s a problem with this argument. Is what we are assuming true?

Advanced: Optimality of A* Graph Search
Proof:
New possible problem: some n on the path to G* isn’t in the queue when we need it, because some worse n’ for the same state was dequeued and expanded first (disaster!)
Take the highest such n in the tree
Let p be the ancestor of n that was on the queue when n’ was popped
f(p) ≤ f(n) because of consistency
f(n) < f(n’) because n’ is suboptimal
So p would have been expanded before n’
Contradiction!