
COMP3702 Artificial Intelligence – Module 1: Search – Part 2

COMP3702

Artificial Intelligence

Module 1: Search – Part 2

Dr Alina Bialkowski

Semester 2, 2021

The University of Queensland

School of Information Technology and Electrical Engineering

Week 3: Logistics

• Assignment 0 (due 4pm this Friday, 13 August)

• RiPPLE round 1 is open (due 4pm Friday, 20 August)

• Assignment 1 is available (due 4pm Friday, 27 August)

• Tutorials 2 and 3 will help you with Assignment 1

• Ed Discussion board is active


Table of contents

1. Informed search

2. Greedy best-first search

3. A* Search


Informed search

Blind vs informed search

• Blind search algorithms (e.g. UCS) use the cost from the root to the current node, g(n), to prioritise which nodes to search.

• Informed search algorithms rely on a heuristic, h(n), that gives an estimated cost from the current node to the goal to prioritise search.

• In general, informed search is faster than blind search
• However, it is more difficult to prove properties of informed search algorithms, as their performance depends heavily on the heuristic used


Blind vs informed search

• Informed search: Select which node to expand based on a function of the estimated cost
from the current node to the goal state

• Cost: f(n) = g(n) + h(n)
• g(n): Cost from root to node n
• h(n): Estimated cost from n to goal (usually based on heuristics)
• In informed search, the node is selected based on f(n), and f(n) must contain h(n)

Informed search algorithms

• Greedy best-first search
• A* search




Informed search using heuristics

Idea: Don’t ignore the goal when selecting paths.

• Often there is extra knowledge that can be used to guide the search: heuristics.

• h(n) is an estimate of the cost of the shortest path from node n to a goal node.

• h(n) needs to be efficient to compute.

• h can be extended to paths: h(n0, . . . , nk) = h(nk).

• h(n) is an underestimate if there is no path from n to a goal with cost less than h(n).

• An admissible heuristic is a nonnegative (≥ 0) heuristic function that does not
overestimate the actual cost of a path to a goal.


Example heuristic functions

• If the nodes are points on a Euclidean plane and the cost is the distance, h(n) can be the
straight-line distance from n to the closest goal (i.e. ignoring obstacles).

• If the nodes are locations and cost is time, we can use the distance to a goal divided by
the maximum speed (underestimate).

• If the goal is complicated, simple decision rules that return an approximate solution and
that are easy to compute can make for good heuristics

• A heuristic function can be found by solving a simpler (less constrained) version of the
problem.
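As an illustration of the first two bullets, the straight-line and travel-time heuristics might be computed as follows. This is a sketch: the coordinate representation and the max_speed parameter are assumptions for illustration, not from the slides.

```python
import math

def straight_line_heuristic(node, goal):
    """Euclidean distance from node to goal, ignoring obstacles.
    This never overestimates the true path cost, so it is admissible."""
    (x1, y1), (x2, y2) = node, goal
    return math.hypot(x2 - x1, y2 - y1)

def time_heuristic(node, goal, max_speed):
    """When cost is time: distance divided by the maximum speed
    underestimates the true travel time (admissible)."""
    return straight_line_heuristic(node, goal) / max_speed
```

Both functions are cheap to evaluate, which matters because h(n) is computed once per generated node.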



Greedy best-first search

Greedy best-first search

Greedy Best-First search is almost the same as UCS, with some key differences:

• Uses a priority queue to order expansion of fringe nodes

• The highest priority in the priority queue for greedy best-first search is given to the node with the smallest estimated cost from the current node to the goal

• The estimated cost-to-goal is given by the heuristic function h(n) [instead of g(n) in UCS]

• If the list of paths on the frontier is [p1, p2, . . .]:
• p1 is selected to be expanded.
• Its successors are inserted into the priority queue.
• The highest-priority node is selected next (and it might be a newly inserted node).
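A minimal sketch of this procedure in Python, assuming a graph given by a neighbours(n) callable that yields (cost, successor) pairs and a heuristic callable h(n); these names and the encoding are illustrative, not from the slides:

```python
import heapq

def greedy_best_first_search(start, goal, neighbours, h):
    """Greedy best-first search: always expand the frontier node
    with the smallest heuristic value h(n), ignoring path cost g(n)."""
    frontier = [(h(start), start, [start])]   # priority is h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for _, succ in neighbours(node):
            if succ not in visited:
                heapq.heappush(frontier, (h(succ), succ, path + [succ]))
    return None   # no path found
```

Note the priority tuple contains only h(succ), never the accumulated cost, which is exactly why the returned path need not be optimal.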


Greedy best-first search: Properties and analysis

Complete? Will greedy best-first search find a solution?

• No (it depends on the heuristic)

Generate optimal solution? Is greedy best-first search guaranteed to find the lowest cost path

to the goal?

• No

Complexity:

• Depends highly on the heuristic
• Worst case if the tree depth is finite: O(b^m), where b is the branching factor and m is the maximum depth of the tree (i.e. it can be exponentially bad).


Example — Navigating UQ

Heuristic values (to Bld7)

h(UQLake) = 100

h(Bld78) = 50

h(AEB) = 53

h(Wordsmith) = 1000

h(Bld42) = 50

h(Bld50) = 38

h(Bld51) = 30

h(Bld7) = 0


A* Search

A* Search

A* search uses both path cost and heuristic values

• It is a mix of uniform-cost and best-first search
• g(p) is the cost of path p from initial state to a node (UCS)
• h(p) estimates the cost from the end of p to a goal (GBFS)
• A* uses: f(p) = g(p) + h(p)

f(p) estimates the total path cost of going from a start node to a goal via p:

start --(path p, cost g(p))--> n --(estimate h(p))--> goal

so f(p) = g(p) + h(p).


A* Search Algorithm

• A* is a mix of uniform-cost and best-first search, algorithmically similar to both

• It treats the frontier as a priority queue ordered by f(p)

• Highest priority is given to the node with the lowest f value: f(p) is the shortest path length from the root to the node (g(p)) plus the estimated future cost from node p to the goal (h(p))

• It always selects the node on the frontier with the lowest estimated distance from the start
to a goal node constrained to go via that node
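The same frontier machinery as greedy best-first search, reordered by f(p) = g(p) + h(p), gives a sketch of A*. As before, the neighbours and h callables and the graph encoding are assumptions for illustration; revisited nodes reached by a costlier path are discarded, matching the revisiting rule discussed later in these slides.

```python
import heapq

def a_star_search(start, goal, neighbours, h):
    """A* search: expand the frontier node with the smallest
    f(n) = g(n) + h(n). With an admissible heuristic, the first
    goal popped from the frontier lies on an optimal path."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest known g per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue   # discard: already reached this node more cheaply
        best_g[node] = g
        for cost, succ in neighbours(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")
```

The only change from greedy best-first search is the priority: g2 + h(succ) instead of h(succ) alone.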


Example — Navigating UQ

Heuristic values (to Bld7)

h(UQLake) = 100

h(Bld78) = 50

h(AEB) = 53

h(Wordsmith) = 1000

h(Bld42) = 50

h(Bld50) = 38

h(Bld51) = 30

h(Bld7) = 0


A* Search: Properties and analysis

• Will A* search find a solution?

• Is A* search guaranteed to find the shortest path?

• What is the time complexity as a function of length of the path selected?

• What is the space complexity as a function of length of the path selected?

• How does the goal affect the search?


Admissibility of A*

If there is a solution, A* always finds an optimal solution (as the first path to a goal it selects) if the following conditions are met:

1. the search graph branching factor b is finite

2. edge costs are bounded above zero (there is some ε > 0 such that all of the edge costs are greater than ε), and

3. h(n) is ≥ 0 and an underestimate of the cost of the shortest path from n to a goal node.

. . .we have seen 2 and 3 before:

A heuristic is admissible if it never overestimates the cost-to-goal


Why is A* admissible?

If a path p to a goal is selected from a frontier, can there be a shorter path to a goal?

• Suppose path p′ is on the frontier.
• Because p was chosen before p′ (so f(p) ≤ f(p′)), and h(p) = 0 since p ends at a goal:

g(p) ≤ g(p′) + h(p′).

• Because h is an underestimate:

g(p′) + h(p′) ≤ g(p′′)

for any path p′′ to a goal that extends p′.

So g(p) ≤ g(p′′) for any other path p′′ to a goal.


How do good heuristics help?

Suppose c is the cost of an optimal solution. What happens to a path p where

• g(p) + h(p) < c
• g(p) + h(p) = c
• g(p) + h(p) > c

How can a better heuristic function help?


Example — Navigating UQ

Heuristic values (to Bld7)

h(UQLake) = 100

h(Bld78) = 50

h(AEB) = 53

h(Wordsmith) = 1000

h(Bld42) = 50

h(Bld50) = 38

h(Bld51) = 30

h(Bld7) = 0


Revisiting nodes

Optimality condition


Revisiting nodes

• Naive: Revisit all vs Discard all
• Optimality vs complexity

• Discard a revisited node if the cost to the node via this new path is more than the cost of
reaching the node via a previously found path.

• The solution will be optimal, but may still be quite inefficient, due to revisiting nodes that
are not in the optimal path.

• Works with a consistent (monotonic) heuristic:
• h(n) ≤ c(n, a, n′) + h(n′)


Consistent heuristic

• h(n) is consistent if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is not greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′

• i.e. h(n) ≤ c(n, a, n′) + h(n′)

• If the heuristic is consistent, when A* expands a node n, the path to n is optimal

• Therefore, we don’t need to revisit nodes that have been expanded


Consistent heuristic

Consistent → admissible, but not the other way around

• Is a consistent heuristic always good?


How to generate heuristics

• Information about the problem (domain knowledge)
• Knowledge about the sub-problems
• Learn from prior results of solving the same or similar problems

• Examples?
• Euclidean distance
• Hamming distance
• Manhattan distance
• Number of inversions in the 8-puzzle
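For example, the Manhattan and Hamming distances for a sliding-tile puzzle might be sketched as follows; the tile -> (row, col) state encoding is an assumption for illustration:

```python
def manhattan_distance(state, goal):
    """Sum of |row diff| + |col diff| over all tiles (blank excluded).
    A standard admissible heuristic for sliding-tile puzzles:
    each tile must move at least this many steps."""
    return sum(abs(r1 - r2) + abs(c1 - c2)
               for tile, (r1, c1) in state.items()
               if tile != 0                      # 0 denotes the blank
               for (r2, c2) in [goal[tile]])

def hamming_distance(state, goal):
    """Number of tiles out of place (blank excluded); admissible
    but weaker than Manhattan distance."""
    return sum(1 for tile, pos in state.items()
               if tile != 0 and goal[tile] != pos)
```

Manhattan distance dominates Hamming distance (it is never smaller while remaining admissible), so it typically prunes more of the search.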


Attributions and References

Thanks to Dr Archie Chapman and Dr Hanna Kurniawati for their materials.

Many of the slides are adapted from David Poole and Alan Mackworth, Artificial Intelligence: Foundations of Computational Agents, 2E, CUP, 2017, https://artint.info/. These materials are copyright © Poole and Mackworth, 2017, licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Other materials derived from Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3E,

Prentice Hall, 2009.

