
Uninformed
Search Algorithms
CITS3001 Algorithms, Agents and Artificial Intelligence

2021, Semester 2
Tim French
Department of Computer Science and Software Engineering
The University of Western Australia

Introduction

• We will formalise the definition of a problem
for an agent to solve, conceptualising

– The environment
– The goal to achieve
– The actions available
– etc.

• We will describe the fundamental search
algorithms available to agents

2

Problem Solving and Search

• We have seen that most intelligent agent models have
– Some knowledge of the state of the world
– A notion of how actions or operations change the state of the world
– A notion of the cost of each possible action
– One or more goals, or states of the world, that they would like to bring about

• Finding a sequence of actions that changes the world from its current state to a
desired goal state is a search problem
– Or a basic planning problem

• Usually we want to find the cheapest sequence
– The cost of a sequence of actions (or a path) is just the sum of their individual costs

• Search algorithms are the cornerstone of AI
• We will examine

– How all of the above concepts are formalised
– The most common search strategies

3

A Running Example

• Our running example (taken from AIMA) is a
simplified road map of part of Romania
• The state of the world is our current location
• The only operator available is to drive from one
city to a connected city
• The cost of an action is the distance between
the cities
• The start state is where we start from (Arad)
• The goal state is where we want to get to
(Bucharest)
• A solution is a sequence of cities that we drive
through
• In general the state of the world is described
abstractly, focussing only on the features
relevant to the problem

4

A second example: the 8-puzzle

• Slide the tiles in the puzzle until the goal state is reached
• The state is the current layout of the tiles
• The only operator is to slide a tile into the blank square

– Or to slide the blank square…
• The cost of each action is 1
• The start state is the puzzle’s initial configuration
• The goal state is as shown
• A solution is a sequence of tile-moves

5
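The slide's single operator can be sketched in Python (supplementary, not from the slides; representation choices here are assumptions). A state is a tuple of nine entries read row by row, with 0 for the blank, and successors are generated by sliding the blank in each legal direction.

```python
def successors(state):
    """Yield (action, new_state) for every legal move of the blank."""
    i = state.index(0)                  # position of the blank (0..8)
    for action, delta in (('up', -3), ('down', 3), ('left', -1), ('right', 1)):
        j = i + delta
        if not 0 <= j < 9:              # stay on the 3x3 board
            continue
        # left/right moves must not wrap between rows
        if action in ('left', 'right') and j // 3 != i // 3:
            continue
        s = list(state)
        s[i], s[j] = s[j], s[i]         # slide the tile into the blank
        yield action, tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)    # blank in the centre: 4 moves
print(len(list(successors(start))))     # → 4
```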

3rd example: Missionaries and Cannibals

• Start state: 3 missionaries, 3 cannibals, and one boat that holds up to 2 people, all
on one side of a river

• Goal state: everybody on the other side of the river
• Current state: the positions of the people and the boat

– A state is legal only if no one gets eaten
– i.e. cannibals never outnumber missionaries

• Operator: 1 or 2 people cross the river in the boat
• Cost: 1

6
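The legality condition ("cannibals never outnumber missionaries") is easy to get wrong, so here is a hedged sketch of the check (supplementary; the `(m, c)` encoding of one bank is an assumed representation). With 3 of each in total, the other bank is implied.

```python
def legal(m, c):
    """True iff cannibals never outnumber missionaries on either bank.

    (m, c) counts missionaries and cannibals on the left bank.
    """
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    # a bank with no missionaries is safe whatever the cannibal count
    left_ok = (m == 0) or (m >= c)
    right_ok = (3 - m == 0) or (3 - m >= 3 - c)
    return left_ok and right_ok

print(legal(3, 3))   # → True  (the start state)
print(legal(1, 2))   # → False (the lone missionary gets eaten)
```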

A Generalised Search Algorithm

• The fundamental idea is
– At any given moment we are in some state s
– s will usually offer several possible actions
– Choose one action to explore first
– Keep s and the other actions to explore later, in case the first one doesn’t deliver

• Action-selection is determined by a search strategy
• We picture a search tree of states, expanding outwards from the initial state of the
problem

GeneralSearch (problem, strategy):
    initialise the tree with the initial state of problem
    while no solution found
        if there are no possible expansions left
            then return failure
        else use strategy to choose a leaf node x to expand
            if x is a goal state
                then return success
            else expand x
                add the new nodes to the tree

7
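The pseudocode above can be rendered as a short Python sketch (not part of the original slides; names like `expand` and `is_goal` are illustrative). The frontier data structure plays the role of the strategy: a FIFO queue gives breadth-first behaviour, a LIFO stack gives depth-first. Like the pseudocode, this is tree search, with no occurs check.

```python
from collections import deque

def general_search(start, is_goal, expand, lifo=False):
    """Search from start; each frontier entry is a path from the start."""
    frontier = deque([[start]])
    while frontier:
        # the pop discipline is the strategy: FIFO = BFS, LIFO = DFS
        path = frontier.pop() if lifo else frontier.popleft()
        state = path[-1]
        if state is not None and is_goal(state):
            return path                    # success: a path to a goal
        for succ in expand(state):
            frontier.append(path + [succ])
    return None                            # failure: no expansions left

# tiny illustrative state space
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(general_search('A', lambda s: s == 'D', lambda s: graph[s]))
# → ['A', 'B', 'D']
```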

Exploring Romania

8

Alternative Formulation

• Given
– A set of possible states S
– A start state s0
– A goal function g(s) → {true, false}
– A terminal condition t(s) → {true, false}
– A successor function successors(s)

U = {s0} — unvisited nodes
V = {} — visited nodes
while U ≠ {}
    s = select a node from U
    if s ∈ V — occurs check
        then discard s
    else if g(s)
        then return success
    else if t(s) — cut-off check
        then discard s
    else
        V = V + {s}
        U = U – {s} + successors(s)

• The data structure used to store U can impose
an order in which the nodes will be visited
– e.g. a priority queue where nodes are stored
in the order in which they should be visited

9
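A hedged Python sketch of this U/V formulation (supplementary, not from the slides). Here `select` pops the last element of U, which makes the default order depth-first; swapping in a different `select` changes the strategy.

```python
def graph_search(s0, g, t, successors, select=lambda U: U.pop()):
    """U/V search: g is the goal test, t the cut-off (terminal) test."""
    U = [s0]                 # unvisited nodes; order imposed by `select`
    V = set()                # visited nodes
    while U:
        s = select(U)
        if s in V:           # occurs check
            continue
        if g(s):
            return s         # success
        if t(s):             # cut-off check
            continue
        V.add(s)
        U.extend(successors(s))
    return None              # failure

# small cyclic state space: the occurs check prevents looping on A <-> B
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': [], 'D': []}
print(graph_search('A', lambda s: s == 'D', lambda s: False,
                   lambda s: graph[s]))
# → D
```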

Comparing Search Strategies

• The performance of search strategies is generally compared in four ways
• Completeness: is the strategy guaranteed to find a solution, assuming that there is

one?
• Optimality: is the strategy guaranteed to find the optimal solution?
• Time complexity: how long does the strategy take to find a solution?
• Space complexity: how much memory is needed to conduct the search?
• The complexities are often expressed in terms of

– b: maximum branching factor of the tree
– m: maximum depth of the search space
– d: depth of the least-cost solution

10

Uninformed Search Strategies

• Breadth-first search
– Expand the shallowest node next

• Uniform-cost search
– Expand the lowest-cost node next

• Depth-first search
– Expand the deepest node next

• Depth-limited search
– Depth-first, but with a cut-off depth

• Iterative deepening depth-first search
– Repeated depth-limited, with increasing cut-offs

• Bidirectional search
– Search from both ends concurrently

• In the next lecture, we will look at informed search strategies, that use additional
information

11

Breadth-first Search
• Expand the shallowest node next

– Expand all nodes at one level before
moving down

• Complete: yes, if b is finite
• Optimal: yes, if all step-costs are equal
• Time: O(1 + b + b^2 + … + b^d) = O(b^d)
• Space: O(b^d), because all of the nodes at
one level must be stored simultaneously
• Space is the big problem
• You might wait thirteen days to solve a
problem
– But who has a petabyte of memory!?

• Welcome to AI!

12
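A back-of-the-envelope check on those numbers (supplementary; the figures of b = 10, d = 12 and roughly 1 kB per node are assumptions in the spirit of AIMA's classic table, not from the slide):

```python
# geometric growth of the BFS frontier: 1 + b + b^2 + ... + b^d nodes
b, d = 10, 12
nodes = sum(b**i for i in range(d + 1))
print(f"{nodes:,} nodes")                                  # → 1,111,111,111,111 nodes
print(f"~{nodes * 1000 / 1e15:.1f} PB at 1 kB per node")   # → ~1.1 PB
```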

Uniform-cost search

• Expand the lowest-cost node next
– Basically a version of breadth-first that allows for varying step-costs

• Complete: yes, if all step-costs ≥ ε for some ε > 0
• Optimal: as above
• Time: O(n), where n is the number of nodes with cost less than the optimum
• Space: as above

13
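A hedged sketch of uniform-cost search using a priority queue (supplementary, not from the slides). The edge list is a small excerpt of the AIMA Romania map, restricted to a few cities for illustration.

```python
import heapq

def uniform_cost(start, goal, successors):
    """Expand the lowest-cost node next; return (cost, path) or None."""
    frontier = [(0, start, [start])]        # (path cost, state, path)
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in explored:               # occurs check
            continue
        explored.add(state)
        for succ, step in successors(state):
            heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None

# excerpt of the Romania map (distances in km, per AIMA)
edges = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
         'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
         'Fagaras': [('Bucharest', 211)],
         'Rimnicu Vilcea': [('Pitesti', 97)],
         'Pitesti': [('Bucharest', 101)],
         'Timisoara': [], 'Zerind': [], 'Bucharest': []}
print(uniform_cost('Arad', 'Bucharest', lambda s: edges[s]))
# → (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Note that the route through Rimnicu Vilcea (418 km) beats the route through Fagaras (450 km), which is why the goal test happens when a node is popped, not when it is generated.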

Depth-first search

• Expand the deepest node next
– Follow one path until you can go no further, then backtrack to the last choice point and
try an alternative

14

Usually needs an occurs-check (as per
Page 10) to prevent looping

• Complete: no, fails in infinite-depth
spaces
• Optimal: no, could hit any solution
first
• Time: O(b^m), follows paths “all the
way down”
• Space: O(bm), because it only needs
to store the current path plus untried
alternatives

Space is a huge advantage
The other metrics can be big
disadvantages
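A hedged recursive sketch of depth-first search with the occurs-check mentioned above (supplementary, not from the slides):

```python
def dfs(state, is_goal, successors, visited=None):
    """Return a path from state to a goal, or None; visited prevents loops."""
    visited = set() if visited is None else visited
    if is_goal(state):
        return [state]
    visited.add(state)
    for succ in successors(state):
        if succ in visited:              # occurs check
            continue
        rest = dfs(succ, is_goal, successors, visited)
        if rest is not None:
            return [state] + rest        # backtracking happens implicitly
    return None

# cyclic graph: without the occurs check this would loop on A <-> B
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': []}
print(dfs('A', lambda s: s == 'C', lambda s: graph[s]))
# → ['A', 'B', 'C']
```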

Depth-limited search

• Depth-first, but with a cut-off depth
– Terminate paths at depth l
– cf. t(s) on Page 10

• Sometimes used to apply depth-first search to infinite (or effectively infinite) spaces
– Find best solution with limited resources
– e.g. game-playing (Lecture 8)

• Works well if we have a good way to choose l
– e.g. the Romanian map has diameter 9

15
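The cut-off can be sketched as follows (supplementary, not from the slides): depth-first, but recursion stops once the limit l is exhausted, playing the role of t(s) on Page 10.

```python
def dls(state, is_goal, successors, l):
    """Depth-limited search: path to a goal within depth l, else None."""
    if is_goal(state):
        return [state]
    if l == 0:                           # cut-off: go no deeper
        return None
    for succ in successors(state):
        rest = dls(succ, is_goal, successors, l - 1)
        if rest is not None:
            return [state] + rest
    return None

chain = {'A': ['B'], 'B': ['C'], 'C': []}
print(dls('A', lambda s: s == 'C', lambda s: chain[s], 1))   # → None
print(dls('A', lambda s: s == 'C', lambda s: chain[s], 2))   # → ['A', 'B', 'C']
```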

Iterative deepening depth-first search

• Repeated depth-limited, with increasing
cut-offs
• Probe deeper and deeper, iteratively
increasing l

16

• Complete: yes
• Optimal: yes, for constant step-costs
– And easily adapted to varying step-costs
• Time: O((d+1)b^0 + db^1 + (d–1)b^2 + … + b^d) = O(b^d)
• Space: O(bd)

The multiplying factors in the time complexity come
from the repeated expansion of nodes near the root
But this is not normally a big problem
For typical values of b, the last layer of the tree
dominates the space requirements
And it’s worth it for the space complexity!

Iterative deepening allows a system to adapt
to resource limits
In this context it acts as an anytime algorithm:
find an initial (hopefully usable) solution,
then try to find a better one
A common optimisation is to start with l bigger than 0
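A hedged sketch of the scheme (supplementary, not from the slides; helper names are illustrative): depth-limited search is simply re-run with l = 0, 1, 2, …, so the shallowest goal is found first.

```python
def depth_limited(state, is_goal, successors, l):
    """Depth-first search cut off at depth l."""
    if is_goal(state):
        return [state]
    if l == 0:
        return None
    for succ in successors(state):
        rest = depth_limited(succ, is_goal, successors, l - 1)
        if rest is not None:
            return [state] + rest
    return None

def iddfs(start, is_goal, successors, max_depth=50):
    """Probe deeper and deeper; the first path found is a shallowest one."""
    for l in range(max_depth + 1):
        path = depth_limited(start, is_goal, successors, l)
        if path is not None:
            return path
    return None

graph = {'A': ['B', 'C'], 'B': [], 'C': ['D'], 'D': []}
print(iddfs('A', lambda s: s == 'D', lambda s: graph[s]))
# → ['A', 'C', 'D']
```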

Bi-directional search

• Search from both ends concurrently
• Usually expands many fewer nodes than unidirectional

– 2b^(d/2) << b^d

• But raises many other difficulties
– There may be many goal states to start from
– Formalising backward steps may be difficult
– The “backwards branching factor” may be much bigger than b
– The cost of checking when the two sides meet may be high
• e.g. chess
– There are many, many checkmate positions
– Was the last move a capture?

17

Summary

• Iterative deepening offers
– the completeness and optimality of breadth-first
– the space advantage of depth-first

18