
COMP9414 Informed Search 1

Informed (Heuristic) Search

• Uninformed methods of search are capable of systematically exploring the state space in finding a goal state

• However, uninformed search methods are very inefficient

• With the aid of problem-specific knowledge, informed methods of search are more efficient

• All implemented using a priority queue to store frontier nodes
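Not part of the original slides: a minimal sketch of such a frontier, using Python's heapq module; the node names and priority values are invented for illustration.

import heapq

# A frontier stored as a priority queue. Each entry is (priority, tie_breaker, node);
# the tie breaker stops heapq from ever comparing the nodes themselves.
frontier = []
heapq.heappush(frontier, (6, 0, "S"))   # greedy search would use h(n) as the priority,
heapq.heappush(frontier, (4, 1, "A"))   # A* search would use f(n) = g(n) + h(n)
priority, _, node = heapq.heappop(frontier)
print(node)   # prints 'A': the node with the smallest priority is expanded first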


COMP9414: Artificial Intelligence

Lecture 2b: Informed Search

Wayne Wobcke

e-mail:w. .au

UNSW ©W. Wobcke et al. 2019–2021

COMP9414 Informed Search 3

Heuristics

• Heuristics are “rules of thumb”

• Heuristics are criteria, methods or principles for deciding which among several alternative courses of action promises to be the most effective in order to achieve some goal (“Heuristics”, Pearl 1984)

• Can make use of heuristics in deciding which is the most “promising” path to take during search

• In search, the heuristic must not overestimate the actual cost to get from the current node to any goal — an admissible heuristic

• Denoted h(n); h(n) = 0 whenever n is a goal node


COMP9414 Informed Search 2

This Lecture

• Heuristics

• Informed Search Methods

◮ Best-First Search

◮ Greedy Search

◮ A∗ Search

◮ Iterative Deepening A∗ Search


COMP9414 Informed Search 5

Heuristics — Example

• 8-Puzzle — Manhattan distance (how far each tile is from its goal position, summed over all tiles)

Start state        Goal state
 2 8 3              1 2 3
 1 6 4              8   4
   7 5              7 6 5

• Therefore h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 1 + 2 = 6 (one term per tile)
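A small sketch (not in the slides) that computes this value for the start and goal states above; the tuple-of-tuples board encoding, with 0 for the blank, is an assumption made for illustration.

# Manhattan distance heuristic for the 8-puzzle (blank encoded as 0).
START = ((2, 8, 3),
         (1, 6, 4),
         (0, 7, 5))
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def manhattan(state, goal):
    # Sum, over the eight tiles, of how far each tile is from its goal square.
    pos = {tile: (r, c) for r, row in enumerate(goal) for c, tile in enumerate(row)}
    return sum(abs(r - pos[tile][0]) + abs(c - pos[tile][1])
               for r, row in enumerate(state)
               for c, tile in enumerate(row) if tile != 0)

print(manhattan(START, GOAL))   # 6, matching the calculation above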


COMP9414 Informed Search 4

Heuristics — Example

• 8-Puzzle — number of tiles out of place

Start state        Goal state
 2 8 3              1 2 3
 1 6 4              8   4
   7 5              7 6 5

• Therefore h(n) = 5 (tiles 1, 2, 6, 7 and 8 are out of place)
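The same kind of sketch (again not from the slides, same assumed encoding with 0 for the blank) for the tiles-out-of-place count.

# Number of tiles out of place (the blank, encoded as 0, is not counted).
START = ((2, 8, 3),
         (1, 6, 4),
         (0, 7, 5))
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def misplaced(state, goal):
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

print(misplaced(START, GOAL))   # 5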


COMP9414 Informed Search 7

Greedy Search

• Idea: Expand the node with the smallest estimated cost to reach the goal

• Use heuristic function h(n) to order nodes on the frontier, i.e. choose the node for expansion with the lowest h(n) (see the sketch below)

• Analysis

◮ Similar to depth-first search; tends to follow a single path to the goal

◮ Not optimal, incomplete

◮ Time O(b^m); Space O(b^m), where b is the branching factor and m the maximum depth of the search space

◮ However, a good heuristic can reduce time and space complexity significantly
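A minimal greedy best-first sketch (not the lecture's own code), assuming a successors(state) function that returns neighbouring states and a heuristic function h(state).

import heapq

def greedy_search(start, goal_test, successors, h):
    # Greedy best-first search: always expand the frontier node with the smallest h(n).
    # successors(s) -> iterable of neighbouring states; h(s) -> heuristic estimate.
    frontier = [(h(start), 0, start, [start])]
    counter = 1          # tie-breaker so states themselves are never compared
    seen = {start}       # loop check; the analysis above is for the plain tree-search version
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                heapq.heappush(frontier, (h(s), counter, s, path + [s]))
                counter += 1
    return None

For the 8-puzzle, successors(s) would return the states reachable by sliding one tile into the blank, and h could be either of the heuristics above.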


COMP9414 Informed Search 6

Heuristics — Example

• Another common heuristic is the straight-line distance (“as the crow flies”) from node to goal

(Figure: a node n, the goal g, and the straight line between them)

• Therefore h(n) = distance from n to g
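For route finding, a tiny sketch of this heuristic, assuming each node is an (x, y) coordinate pair (made up for illustration).

import math

def straight_line(n, g):
    # Euclidean ("as the crow flies") distance between node n and goal g.
    return math.hypot(n[0] - g[0], n[1] - g[1])

print(straight_line((0, 0), (3, 4)))   # 5.0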


COMP9414 Informed Search 9

A∗ Search

• Idea: Use both the cost of the path generated so far and the estimate to the goal to order nodes on the frontier

• g(n) = cost of the path from the start to n; h(n) = estimated cost from n to a goal

• Order the priority queue using the function f(n) = g(n) + h(n)

• f(n) is the estimated cost of the cheapest solution extending this path

• Expand the node from the frontier with the smallest f-value (see the sketch below)

• Essentially combines uniform-cost search and greedy search
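A minimal A∗ sketch (again not from the slides), assuming successors(state) now returns (next_state, step_cost) pairs; the frontier is ordered by f(n) = g(n) + h(n).

import heapq

def astar(start, goal_test, successors, h):
    # A* search: order the frontier by f(n) = g(n) + h(n).
    # successors(s) -> iterable of (next_state, step_cost); h(s) -> admissible estimate.
    frontier = [(h(start), 0, 0, start, [start])]   # entries are (f, tie, g, state, path)
    counter = 1
    best_g = {start: 0}       # cheapest path cost found so far to each state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue          # stale queue entry; a cheaper path to this state was found later
        for s, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(s, float("inf")):
                best_g[s] = g2
                heapq.heappush(frontier, (g2 + h(s), counter, g2, s, path + [s]))
                counter += 1
    return None

Recording best_g and skipping stale entries plays roughly the role of the OPEN/CLOSED bookkeeping that the A∗ Algorithm slide later spells out explicitly.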


COMP9414 Informed Search 8

Greedy Search

(Figure: greedy search tree for the 8-puzzle example — each generated state is labelled with its heuristic value h(n) in parentheses, and at each step the frontier node with the smallest h(n) is expanded, ending at the goal state with h(n) = 0.)


COMP9414 Informed Search 11

A∗ Search

(Figure: A∗ search tree for the same 8-puzzle example — each state is labelled with g(n) + h(n) = f(n), e.g. the initial state is labelled 0 + 4 = 4, and at each step the frontier node with the smallest f-value is expanded.)


COMP9414 Informed Search 10

A∗ Algorithm

OPEN — nodes on the frontier; CLOSED — expanded nodes

OPEN = {〈s0, nil〉} where s0 is the initial state
while OPEN is not empty
    remove from OPEN a node n = 〈s, p〉 with minimal f(n)
    place n on CLOSED
    if s is a goal state, return success (with path p)
    for each edge e connecting s and a successor state s′ with cost c
        if 〈s′, p′〉 is on CLOSED
            if cost(p ⊕ e) < cost(p′), remove 〈s′, p′〉 from CLOSED and put 〈s′, p ⊕ e〉 on OPEN
        else if 〈s′, p′〉 is on OPEN
            if cost(p ⊕ e) < cost(p′), replace 〈s′, p′〉 by 〈s′, p ⊕ e〉 on OPEN
        else if s′ is not on OPEN
            put 〈s′, p ⊕ e〉 on OPEN
return failure

COMP9414 Informed Search 13

A∗ Search – Optimality

• Conditions on the state space graph

◮ Each node has a finite number of successors

◮ Every arc in the graph has cost greater than some ε > 0

• Condition on the heuristic function h(n): admissibility

◮ For every node n, the heuristic never overestimates the actual cost h∗(n) of reaching a goal from n, i.e. h(n) ≤ h∗(n)


COMP9414 Informed Search 12

A∗ Search — Analysis

Subject to the conditions on the A∗ Search – Optimality slide:

• Optimal (and optimally efficient)

• Complete

• Number of nodes searched (and stored) is still exponential in the worst case

◮ Unless the error in the heuristic grows no faster than the log of the actual path cost h∗(n) of reaching a goal from n: |h(n) − h∗(n)| ≤ O(log h∗(n))

◮ This almost never happens: for many heuristics, the error is at least proportional to the path cost


COMP9414 Informed Search 15

Proof of the Optimality of the A∗ Algorithm

G: optimal goal node; G2: another goal node selected by A∗

n: node on the frontier on the optimal path to G; h∗(n): true cost to a goal from n

Suppose A∗ chose G2 rather than n. Then:

    g(G2) = f(G2) ≤ f(n)     since G2 is a goal node and A∗ chose G2
    f(n)  = g(n) + h(n)      by definition
          ≤ g(n) + h∗(n)     by admissibility
          ≤ f(G)             since G is on a path from n
          = g(G)             since G is a goal node

So g(G2) ≤ g(G): this means G2 is also optimal, and hence so is any goal node returned by A∗


COMP9414 Informed Search 14

A∗ Search – Optimal Efficiency

• A∗ is optimally efficient for a given heuristic: of the optimal search algorithms that expand search paths from the root node, no other optimal algorithm expands fewer nodes in finding a solution

• Monotonic (consistent) heuristic — along any path, the f-cost never decreases

◮ Follows from the triangle inequality: h(n) ≤ cost(n, n′) + h(n′)

• A common property of admissible heuristics

◮ If this holds, don’t need the CLOSED set – a big saving

◮ If not, the f-value for a node n′ connected to n can be set using the pathmax equation (see the sketch below): f(n′) = max(f(n), g(n′) + h(n′))
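A small illustrative sketch (not from the slides) of applying the pathmax equation when a child node is generated.

def pathmax_f(f_parent, g_child, h_child):
    # Pathmax: never let a child's f-value drop below its parent's, so f stays
    # non-decreasing along a path even when h is not monotonic.
    return max(f_parent, g_child + h_child)

# Hypothetical values: the parent has f = 7, the child has g = 5 and h = 1,
# so g + h = 6, but pathmax keeps the child's f at 7.
print(pathmax_f(7, 5, 1))   # 7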


COMP9414 Informed Search 17

Generating Heuristics

• Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem

• If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then #tiles-out-of-place gives the length of the shortest solution

• If the rules are relaxed so that a tile can move to any adjacent square, then Manhattan distance gives the length of the shortest solution

• For TSP: let a “path” be any structure that connects all cities =⇒ minimum spanning tree heuristic (see the sketch below)
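A sketch of the minimum-spanning-tree relaxation (illustrative only; the city coordinates and the simple Prim's-algorithm implementation are assumptions). Any tour minus one edge is a spanning tree, so the MST weight never overestimates the cost of the best tour, making it admissible.

import math

def mst_cost(cities):
    # Prim's algorithm: total weight of a minimum spanning tree over the cities,
    # using straight-line distances as edge weights.
    if len(cities) < 2:
        return 0.0
    remaining = set(cities)
    start = remaining.pop()
    best = {c: math.dist(start, c) for c in remaining}   # cheapest edge into the growing tree
    total = 0.0
    while remaining:
        nxt = min(remaining, key=best.get)
        remaining.remove(nxt)
        total += best.pop(nxt)
        for c in remaining:
            best[c] = min(best[c], math.dist(nxt, c))
    return total

# Hypothetical cities as (x, y) points; the MST weight (10.0) is a lower bound on any tour.
print(mst_cost([(0, 0), (0, 3), (4, 0), (4, 3)]))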


COMP9414 Informed Search 16

Heuristics — Properties

• h2 dominates h1 iff h2(n) ≥ h1(n) for every node n

• A∗ expands fewer nodes on average using h2 than using h1

◮ Every node for which f(n) < f∗ is expanded, so n is expanded whenever h(n) < f∗ − g(n); hence any node expanded using h2 is also expanded using h1

◮ Always better to use an (admissible) heuristic with higher values

• Suppose there are a number of admissible heuristics for a problem: h1(n), h2(n), . . . , hk(n)

◮ Then max(h1(n), . . . , hk(n)) is a more powerful admissible heuristic

◮ Therefore can design a range of heuristics for special cases

COMP9414 Informed Search 19

Conclusion

• Informed search makes use of problem-specific knowledge to guide the progress of search

• This can lead to a significant improvement in performance

• Much research has gone into admissible heuristics

• Even on the automatic generation of admissible heuristics

COMP9414 Informed Search 18

Iterative Deepening A∗ Search

• IDA∗ performs repeated depth-bounded depth-first searches as in Iterative Deepening, however the bound is based on f(n) (see the sketch below)

• Start by using the f-value of the initial state as the bound

• If the search ends without finding a solution, repeat with a new bound: the minimum f-value exceeding the previous bound

• IDA∗ is optimal and complete with the same provisos as A∗

• Due to depth-first search, space complexity is O(b f∗/δ) (where δ = smallest operator cost and f∗ = optimal solution cost) — often O(bd) is a reasonable approximation

• Another variant – SMA∗ (Simplified Memory-Bounded A∗) – makes full use of memory to avoid expanding previously expanded nodes
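A compact IDA∗ sketch (not the lecture's code), using the same assumed successors(state) -> (next_state, step_cost) interface as the A∗ sketch earlier.

def ida_star(start, goal_test, successors, h):
    # Repeated depth-first searches bounded by f = g + h; each new bound is the
    # smallest f-value that exceeded the previous bound.
    bound = h(start)          # start with the f-value of the initial state (g = 0)
    path = [start]

    def dfs(state, g, bound):
        f = g + h(state)
        if f > bound:
            return f                      # report the f-value that broke the bound
        if goal_test(state):
            return ("FOUND", g)
        smallest = float("inf")
        for s, cost in successors(state):
            if s in path:                 # avoid cycles along the current path
                continue
            path.append(s)
            result = dfs(s, g + cost, bound)
            if isinstance(result, tuple):
                return result             # solution found below; keep path intact
            smallest = min(smallest, result)
            path.pop()
        return smallest

    while True:
        result = dfs(start, 0, bound)
        if isinstance(result, tuple):
            return list(path), result[1]  # (solution path, its cost)
        if result == float("inf"):
            return None                   # no solution
        bound = result                    # minimum f-value exceeding the previous bound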