Problem solving and search
Chapter 3
Outline
♦ Problem-solving agents
♦ Problem types
♦ Problem formulation
♦ Example problems
♦ Basic search algorithms
Problem-solving agents
Restricted form of general agent:
function Simple-Problem-Solving-Agent(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation
    state ← Update-State(state, percept)
    if seq is empty then
        goal ← Formulate-Goal(state)
        problem ← Formulate-Problem(state, goal)
        seq ← Search(problem)
    action ← Recommendation(seq, state)
    seq ← Remainder(seq, state)
    return action
Note: this is offline problem solving; solution executed “eyes closed.”
Online problem solving involves acting without complete knowledge.
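A minimal Python sketch of this agent loop (illustrative only; update_state, formulate_goal, formulate_problem and search are assumed helpers standing in for the pseudocode's Update-State, Formulate-Goal, Formulate-Problem and Search):

# Sketch of the simple problem-solving agent; not the slides' reference code.
class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search
        self.seq = []        # remaining action sequence
        self.state = None    # current world-state description

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                              # plan only when idle
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem) or []     # offline search for a plan
        return self.seq.pop(0) if self.seq else None  # recommend next action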
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Formulate goal:
be in Bucharest
Formulate problem:
states: various cities
actions: drive between cities
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
[Figure: road map of Romania showing cities (including Bucharest, Giurgiu, Rimnicu Vilcea) and the driving distances between them, used as step costs]
Problem types
Deterministic, fully observable =⇒ single-state problem
Agent knows exactly which state it will be in; solution is a sequence
Non-observable =⇒ conformant problem
Agent may have no idea where it is; solution (if any) is a sequence
Nondeterministic and/or partially observable =⇒ contingency problem
percepts provide new information about current state
solution is a contingent plan or a policy
often interleave search, execution
Unknown state space =⇒ exploration problem (“online”)
Example: vacuum world
[Figure: the eight vacuum-world states, numbered 1–8]
Single-state, start in #5. Solution?? [Right, Suck]
Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}
e.g., Right goes to {2, 4, 6, 8}. Solution?? [Right, Suck, Left, Suck]
Contingency, start in #5
Murphy’s Law: Suck can dirty a clean carpet
Local sensing: dirt, location only.
Solution?? [Right, if dirt then Suck]
Single-state problem formulation
A problem is defined by four items:
initial state e.g., “at Arad”
successor function S(x) = set of action–state pairs
e.g., S(Arad) = {〈Arad→ Zerind, Zerind〉, . . .}
goal test, can be
explicit, e.g., x = “at Bucharest”
implicit, e.g., NoDirt(x)
path cost (additive)
e.g., sum of distances, number of actions executed, etc.
c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions
leading from the initial state to a goal state
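A rough Python rendering of these four items (the class and method names are illustrative, not from the slides):

class Problem:
    """A single-state problem: initial state, successors, goal test, step cost."""
    def __init__(self, initial, goal=None):
        self.initial = initial
        self.goal = goal

    def successors(self, state):
        """Return a list of (action, next_state) pairs; override per problem."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal        # explicit goal test by default

    def step_cost(self, state, action, next_state):
        return 1                         # additive, assumed >= 0

Subclasses override successors (and, where needed, goal_test and step_cost); the search algorithms sketched later rely only on this interface.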
Selecting a state space
Real world is absurdly complex
⇒ state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
e.g., “Arad → Zerind” represents a complex set
of possible routes, detours, rest stops, etc.
For guaranteed realizability, any real state “in Arad”
must get to some real state “in Zerind”
(Abstract) solution =
set of real paths that are solutions in the real world
Each abstract action should be “easier” than the original problem!
Example: vacuum world state space graph
[Figure: the eight vacuum-world states as a graph, with arcs labelled L (Left), R (Right), and S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
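One possible encoding of this formulation for the two-square world, with a state as (robot location, frozenset of dirty squares); a sketch under those assumptions, not code from the slides:

# Two-square vacuum world: locations 0 (left) and 1 (right).
def vacuum_successors(state):
    loc, dirt = state                       # dirt: frozenset of dirty locations
    return [('Left',  (0, dirt)),           # moving left in square 0 is a no-op
            ('Right', (1, dirt)),
            ('Suck',  (loc, dirt - {loc})),
            ('NoOp',  (loc, dirt))]

def vacuum_goal_test(state):
    return not state[1]                     # goal: no dirt anywhere

def vacuum_step_cost(state, action, next_state):
    return 0 if action == 'NoOp' else 1     # 1 per action, 0 for NoOp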
Example: The 8-puzzle
[Figure: a start state and the goal state of the 8-puzzle]
states??: integer locations of tiles (ignore intermediate positions)
actions??: move blank left, right, up, down (ignore unjamming etc.)
goal test??: = goal state (given)
path cost??: 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
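A compact successor function for this formulation, storing the board as a tuple of 9 tile numbers with 0 for the blank (illustrative sketch; the names are not from the slides):

def eight_puzzle_successors(state):
    """state: tuple of 9 ints in row-major order, 0 = blank."""
    moves = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}
    blank = state.index(0)
    result = []
    for action, delta in moves.items():
        target = blank + delta
        if action == 'Left' and blank % 3 == 0:    # blank already in left column
            continue
        if action == 'Right' and blank % 3 == 2:   # blank already in right column
            continue
        if not 0 <= target < 9:                    # off the top or bottom edge
            continue
        board = list(state)
        board[blank], board[target] = board[target], board[blank]
        result.append((action, tuple(board)))
    return result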
Example: robotic assembly
[Figure: a robot arm and the parts of the object to be assembled]
states??: real-valued coordinates of robot joint angles and of the parts of the object to be assembled
actions??: continuous motions of robot joints
goal test??: complete assembly with no robot included!
path cost??: time to execute
Tree search algorithms
Basic idea:
offline, simulated exploration of state space
by generating successors of already-explored states
(a.k.a. expanding states)
function Tree-Search(problem, strategy) returns a solution, or failure
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion then return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state then return the corresponding solution
        else expand the node and add the resulting nodes to the search tree
    end
Tree search example
[Figure: successive snapshots of the search tree for the Romania route-finding problem, rooted at Arad and expanded one node at a time]
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree
includes parent, children, depth, path cost g(x)
States do not have parents, children, depth, or path cost!
[Figure: an 8-puzzle state (left) and the corresponding search-tree node (right), with fields state, parent, action, depth = 6, g = 6]
The Expand function creates new nodes, filling in the various fields and
using the SuccessorFn of the problem to create the corresponding states.
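A node can be written down directly as a small data structure; a sketch with the fields named on this slide:

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                          # the state this node contains
    parent: Optional['Node'] = None     # node that generated this one
    action: Any = None                  # action applied to the parent
    path_cost: float = 0.0              # g(x): cost of the path from the root
    depth: int = 0                      # number of steps from the root

    def solution(self):
        """Actions along the path from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))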
Implementation: general tree search
function Tree-Search(problem, fringe) returns a solution, or failure
    fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
    loop do
        if fringe is empty then return failure
        node ← Remove-Front(fringe)
        if Goal-Test(problem, State[node]) then return node
        fringe ← InsertAll(Expand(node, problem), fringe)

function Expand(node, problem) returns a set of nodes
    successors ← the empty set
    for each action, result in Successor-Fn(problem, State[node]) do
        s ← a new Node
        Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
        Path-Cost[s] ← Path-Cost[node] + Step-Cost(State[node], action, result)
        Depth[s] ← Depth[node] + 1
        add s to successors
    return successors
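In Python, using the Node sketch above and a deque as the fringe; the queueing strategy is passed in as insert_all, which decides where new successors go (a sketch, not the book's code):

from collections import deque

def expand(node, problem):
    """Create the child nodes of node via the problem's successor function."""
    children = []
    for action, result in problem.successors(node.state):
        children.append(Node(
            state=result,
            parent=node,
            action=action,
            path_cost=node.path_cost + problem.step_cost(node.state, action, result),
            depth=node.depth + 1))
    return children

def tree_search(problem, insert_all):
    """insert_all(children, fringe) implements the expansion strategy."""
    fringe = deque([Node(problem.initial)])
    while fringe:
        node = fringe.popleft()                    # Remove-Front
        if problem.goal_test(node.state):
            return node                            # solution node
        insert_all(expand(node, problem), fringe)  # InsertAll
    return None                                    # failure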
Search strategies
A strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
completeness—does it always find a solution if one exists?
time complexity—number of nodes generated/expanded
space complexity—maximum number of nodes in memory
optimality—does it always find a least-cost solution?
Time and space complexity are measured in terms of
b—maximum branching factor of the search tree
d—depth of the least-cost solution
m—maximum depth of the state space (may be ∞)
Uninformed search strategies
Uninformed strategies use only the information available
in the problem definition
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Breadth-first search
Expand shallowest unexpanded node
Implementation:
    fringe is a FIFO queue, i.e., new successors go at end
[Figure: breadth-first expansion of a binary tree with root A, children B and C, and grandchildren D, E, F, G]
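With the tree_search sketch above, breadth-first search is just the FIFO queueing discipline: new successors go at the end of the deque, so the shallowest node is removed first.

def breadth_first_search(problem):
    # FIFO: append children at the end; popleft removes the shallowest node.
    return tree_search(problem, lambda children, fringe: fringe.extend(children))

For example, breadth_first_search(my_problem).solution() would return the action sequence to the shallowest goal, if one exists, assuming my_problem follows the Problem interface sketched earlier.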
Properties of breadth-first search
Complete?? Yes (if b is finite)
Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d − 1) = O(b^(d+1)), i.e., exp. in d
Space?? O(b^(d+1)) (keeps every node in memory)
Optimal?? Yes (if cost = 1 per step); not optimal in general
Space is the big problem; can easily generate nodes at 100MB/sec,
so 24hrs = 8640GB.
Uniform-cost search
Expand least-cost unexpanded node
Implementation:
fringe = queue ordered by path cost, lowest first
Equivalent to breadth-first if step costs all equal
Complete?? Yes, if step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
    where C* is the cost of the optimal solution
Space?? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
Optimal?? Yes—nodes expanded in increasing order of g(n)
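A standalone sketch of uniform-cost search with a heap-based priority queue ordered by g, assuming the Problem/Node/expand sketches above:

import heapq, itertools

def uniform_cost_search(problem):
    counter = itertools.count()        # tie-breaker so the heap never compares Nodes
    fringe = [(0.0, next(counter), Node(problem.initial))]
    while fringe:
        g, _, node = heapq.heappop(fringe)          # least-cost unexpanded node
        if problem.goal_test(node.state):
            return node
        for child in expand(node, problem):
            heapq.heappush(fringe, (child.path_cost, next(counter), child))
    return None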
Depth-first search
Expand deepest unexpanded node
Implementation:
    fringe = LIFO queue, i.e., put successors at front
[Figure: depth-first expansion of a binary tree with root A, second level B–C, third level D–G, and leaves H–O]
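With the same tree_search sketch, depth-first search only changes the queueing discipline: successors go at the front of the deque (LIFO).

def depth_first_search(problem):
    # extendleft(reversed(...)) puts the children at the front in their original
    # order, so the deepest, leftmost node is expanded next.
    return tree_search(problem,
                       lambda children, fringe: fringe.extendleft(reversed(children)))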
Properties of depth-first search
Complete?? No: fails in infinite-depth spaces, spaces with loops
    Modify to avoid repeated states along path
    ⇒ complete in finite spaces
Time?? O(b^m): terrible if m is much larger than d
    but if solutions are dense, may be much faster than breadth-first
Space?? O(bm), i.e., linear space!
Optimal?? No
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
function Depth-Limited-Search(problem, limit) returns soln/fail/cutoff
    Recursive-DLS(Make-Node(Initial-State[problem]), problem, limit)

function Recursive-DLS(node, problem, limit) returns soln/fail/cutoff
    cutoff-occurred? ← false
    if Goal-Test(problem, State[node]) then return node
    else if Depth[node] = limit then return cutoff
    else for each successor in Expand(node, problem) do
        result ← Recursive-DLS(successor, problem, limit)
        if result = cutoff then cutoff-occurred? ← true
        else if result ≠ failure then return result
    if cutoff-occurred? then return cutoff else return failure
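The same recursion in Python, returning a Node on success and the strings 'cutoff' or 'failure' otherwise (a sketch built on the Node/expand helpers above):

def depth_limited_search(problem, limit):
    return recursive_dls(Node(problem.initial), problem, limit)

def recursive_dls(node, problem, limit):
    cutoff_occurred = False
    if problem.goal_test(node.state):
        return node
    if node.depth == limit:
        return 'cutoff'
    for successor in expand(node, problem):
        result = recursive_dls(successor, problem, limit)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result != 'failure':
            return result                      # a solution node
    return 'cutoff' if cutoff_occurred else 'failure'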
Iterative deepening search
function Iterative-Deepening-Search(problem) returns a solution
    inputs: problem, a problem
    for depth ← 0 to ∞ do
        result ← Depth-Limited-Search(problem, depth)
        if result ≠ cutoff then return result
    end
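The same loop in Python; the slides iterate to infinity, so this sketch uses a finite bound as a safety valve:

def iterative_deepening_search(problem, max_depth=50):
    for depth in range(max_depth):              # stands in for 0 to infinity
        result = depth_limited_search(problem, depth)
        if result != 'cutoff':
            return result                       # a solution node or 'failure'
    return 'cutoff'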
Iterative deepening search, l = 0 to 3
[Figure: the trees explored by successive depth-limited searches with Limit = 0, 1, 2, and 3 on a binary tree with root A and leaves H–O]
Properties of iterative deepening search
Complete?? Yes
Time?? (d + 1)b^0 + d b^1 + (d − 1)b^2 + ... + b^d = O(b^d)
Space?? O(bd)
Optimal?? Yes, if step cost = 1
    Can be modified to explore uniform-cost tree
Numerical comparison for b = 10 and d = 5, solution at far right leaf:
    N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
    N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
IDS does better because other nodes at depth d are not expanded
BFS can be modified to apply goal test when a node is generated
Summary of algorithms
Criterion    Breadth-First   Uniform-Cost    Depth-First   Depth-Limited   Iterative Deepening
Complete?    Yes*            Yes*            No            Yes, if l ≥ d   Yes
Time         b^(d+1)         b^⌈C*/ε⌉        b^m           b^l             b^d
Space        b^(d+1)         b^⌈C*/ε⌉        b^m           b^l             b^d
Optimal?     Yes*            Yes             No            No              Yes*
Repeated states
Failure to detect repeated states can turn a linear problem into an exponential one!
[Figure: a state space that is a linear chain of states A, B, C, D, and the corresponding search tree, which contains exponentially many repeated states]
Graph search
function Graph-Search(problem, fringe) returns a solution, or failure
    closed ← an empty set
    fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
    loop do
        if fringe is empty then return failure
        node ← Remove-Front(fringe)
        if Goal-Test(problem, State[node]) then return node
        if State[node] is not in closed then
            add State[node] to closed
            fringe ← InsertAll(Expand(node, problem), fringe)
    end
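A Python sketch of Graph-Search that adds the closed set to the earlier tree_search, assuming hashable states and the Node/expand helpers above:

from collections import deque

def graph_search(problem, insert_all):
    closed = set()                              # states already expanded
    fringe = deque([Node(problem.initial)])
    while fringe:
        node = fringe.popleft()
        if problem.goal_test(node.state):
            return node
        if node.state not in closed:            # expand each state at most once
            closed.add(node.state)
            insert_all(expand(node, problem), fringe)
    return None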
Summary
Problem formulation usually requires abstracting away real-world details to
define a state space that can feasibly be explored
Variety of uninformed search strategies
Iterative deepening search uses only linear space
and not much more time than other uninformed algorithms
Graph search can be exponentially more efficient than tree search