Introduction to AI -Search-
Francesca Toni
Outline
• Solving problems by search
• Basic search algorithms (uninformed/blind search)
• Heuristic-based search algorithms (informed search)
• Search for games (adversarial search )
Recommended reading:
(most of) Chapters 3, 5
Additional reading:
Chapter 3
Section 11.2.2
Solving problems by search
• A problem solution can be abstracted as a path from some (start) node to some (goal) node in a suitably defined directed graph
• Search algorithms to find (optimal) paths
Route finding example
On holiday in Romania; today in Arad; flight leaves tomorrow from Bucharest
• Start: Arad
• Goal: Bucharest
• Graph
• Optimal path
• shortest, or
• quickest, or
• cheapest
Vacuum world example
The vacuum cleaner needs to remove all dirt from the room
• Start: A
• Goal: no dirt anywhere in A, B
• Graph
• Optimal path
• shortest, or
• quickest, or
• cheapest
8-puzzle example
• Graph?
• Optimal path?
Which problems can be solved by (classical) search?
Environment is
• Observable
• Current state known
• Discrete
• From each state finitely many next states (by executing actions)
• Deterministic
• Each action has exactly one outcome (next state)
• Known
• Possible actions and next states for each state
Search problems – formally
• Initial state (start) e.g. In(Arad)
• Transition model between states (graph)
possibly given by successor function RESULT: States×Actions → States
e.g. RESULT (In(Arad), Go(Zerind))=In(Zerind)
• Goal test (set of goal states)
• explicit e.g. In(Bucharest), or
• implicit e.g. ∀x ¬Dirt(x)
• Path cost (additive = sum of positive step costs)
e.g. sum of distances, number of actions
Solution=path (sequence of actions) from the initial state to a goal state
Optimal solution= solution with lowest path cost
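The components above can be sketched as a small Python class, here for a fragment of the route-finding example. The class name and exact interface are assumptions of this sketch (the slides only fix the abstract components); the road distances are taken from the usual Romania map.

```python
# A minimal sketch of a formal search problem: initial state, actions,
# RESULT (successor function), goal test, and positive step costs.

class RouteProblem:
    # A fragment of the Romania road map (distances in km).
    ROADS = {
        "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
        "Rimnicu Vilcea": {"Pitesti": 97},
        "Fagaras": {"Bucharest": 211},
        "Pitesti": {"Bucharest": 101},
    }

    def __init__(self, start="Arad", goal="Bucharest"):
        self.initial_state, self.goal = start, goal

    def actions(self, state):
        # An action is "go to an adjacent city".
        return list(self.ROADS.get(state, {}))

    def result(self, state, action):      # RESULT: States x Actions -> States
        return action

    def is_goal(self, state):             # explicit goal test
        return state == self.goal

    def step_cost(self, state, action):   # positive step cost; path cost is additive
        return self.ROADS[state][action]
```

A solution is then a sequence of actions from `initial_state` to a state passing `is_goal`, and an optimal solution minimises the sum of `step_cost` values along the path.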
Vacuum world example – formally
• Initial state = In(A) ∧ Dirt(A) ∧ Dirt(B)
• Goal test = {¬Dirt(A) ∧ ¬Dirt(B)}
• RESULT(In(A) ∧ Dirt(A) ∧ Dirt(B), S) = In(A) ∧ ¬Dirt(A) ∧ Dirt(B), etc.
• Step cost=1, path cost=number of actions
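The vacuum-world transition model can be sketched directly, assuming states are (location, dirt in A, dirt in B) triples and actions Left/Right/Suck (the state encoding and function names are assumptions of the sketch):

```python
# Transition model and implicit goal test for the two-room vacuum world.

def vacuum_result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        # Sucking removes dirt only in the current location.
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    raise ValueError(f"unknown action: {action}")

def vacuum_goal(state):
    # Implicit goal test: no dirt anywhere (forall x, not Dirt(x)).
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b
```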
8-puzzle example – formally
• Initial state = Start – e.g. represented by integer locations of tiles
• Transition model between states?
(successor function RESULT: States×Actions → States ?)
• Goal test ={Goal} – e.g. represented by integer locations of tiles
• Path cost= number of actions
Search algorithms – basic idea
Generate a (search) tree:
1. Initialise the tree with the initial state as root (=frontier)
2. Repeat
i. if the frontier of the tree is empty then return fail
ii. choose and remove a leaf node L from the frontier
iii. if L is a goal state (in the goal test) then return corresponding solution
iv. expand L, adding the resulting nodes to the frontier
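The loop above can be sketched in Python. The frontier is a plain list, nodes carry the path taken so far, and the `choose` of step ii is left as a parameter, since it is exactly what distinguishes the concrete algorithms later. Note that on graphs with cycles this basic version can loop forever, which is what the loop-checked variant addresses.

```python
# Generic tree search: the choose function picks the index of the
# frontier node to expand next.

def tree_search(problem, choose):
    frontier = [(problem.initial_state, [])]        # 1. root only
    while True:                                     # 2. repeat
        if not frontier:                            # i.  empty frontier -> fail
            return None
        i = choose(frontier)                        # ii. choose a leaf node L
        state, path = frontier.pop(i)
        if problem.is_goal(state):                  # iii. goal test
            return path                             #      corresponding solution
        for a in problem.actions(state):            # iv. expand L
            frontier.append((problem.result(state, a), path + [a]))
```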
Search algorithms – basic idea with loop checking
Generate a (search) tree:
1. Initialise the tree with the initial state as root
2. Repeat
i. if the frontier of the tree is empty then return fail
ii. choose and remove a leaf node L from the frontier
iii. if L is a goal state (in the goal test) then return corresponding solution
*add L to the “explored set”
iv. expand L, adding the resulting nodes to the frontier
*only if not in the frontier or the “explored set” already
* Additions to avoid loops
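The starred additions can be sketched as a variant of the same loop, with an explored set and a membership check before adding to the frontier (interface names as in the earlier sketch are an assumption):

```python
# Graph search: tree search plus loop checking via an explored set.

def graph_search(problem, choose):
    frontier = [(problem.initial_state, [])]
    explored = set()                                     # * the "explored set"
    while frontier:
        i = choose(frontier)
        state, path = frontier.pop(i)
        if problem.is_goal(state):
            return path
        explored.add(state)                              # * add L to explored set
        in_frontier = {s for s, _ in frontier}
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            # * only add if not already in the frontier or explored set
            if s2 not in explored and s2 not in in_frontier:
                frontier.append((s2, path + [a]))
    return None                                          # empty frontier -> fail
```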
Search algorithms – concretely
Definition of choose
Uninformed/blind search
• Breadth-first search
• choose (a) shallowest (closest to root) unexpanded node
• Uniform-cost search
• choose unexpanded node with lowest path cost
• Depth-first search
• choose deepest (farthest from the root) unexpanded node.
• Depth-limited search
• depth-first search with given depth limit l (nodes of depth l have no children)
• Iterative deepening search
• depth-limited search of increasing l
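Each strategy above differs only in the choose step. A sketch, assuming the frontier is a list appended to in expansion order (so index 0 is the oldest, shallowest entry and the last index the newest, deepest one), and that for uniform-cost each entry additionally carries its path cost g:

```python
# choose functions for the generic search loop: each returns the index
# of the frontier entry to expand next.

def choose_breadth_first(frontier):
    # shallowest node: FIFO front of the expansion-order list
    return 0

def choose_depth_first(frontier):
    # deepest node: the most recently added entry (LIFO back)
    return len(frontier) - 1

def choose_uniform_cost(frontier):
    # lowest path cost g; entries assumed to be (state, path, g) triples
    return min(range(len(frontier)), key=lambda i: frontier[i][2])
```

In practice uniform-cost search keeps the frontier in a priority queue rather than scanning a list; the sketch only makes the selection rule explicit.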
• Breadth-first (choose shallowest)
…
• Uniform-cost (choose cheapest) all costs equal
…
= Breadth-first
• Uniform-cost (choose cheapest), any costs
[Figure: search tree rooted at Sibiu: successors Rimnicu Vilcea (80) and Fagaras (99); expanding Rimnicu Vilcea adds Pitesti (177); …]
• Depth-first (choose deepest)
…
• Depth-limited (l=1)
• Iterative deepening
Depth-limited (l=0) Depth-limited (l=1) Depth-limited (l=2) …
Uninformed search – Variants
• Backtracking search – variant of depth-first search
• Only one (successor) node generated when expanded
• Each (partially expanded) successor node remembers which node to generate next
• Bi-directional search
• Two simultaneous searches (from start to goal and from goal to start)
• Succeeds when (if) the frontiers of the two searches intersect
Properties
• completeness: does the algorithm always find a solution if one exists?
• time complexity: number of nodes generated/expanded
• space complexity: maximum number of nodes in memory
• optimality: does the algorithm always find an optimal (least-cost) solution?
Properties of uninformed search
(with loop-checking)
Criterion | Breadth-first | Uniform-cost | Depth-first | Backtracking | Depth-limited | Iterative deepening | Bidirectional (breadth-first)
Complete? | Yes* | Yes*○ | Yes† | Yes† | No | Yes* | Yes*
Time | O(b^d) | O(b^(d+1))● | s | s | s(l) | O(b^d) | O(b^(d/2))
Space | O(b^d) | O(b^(d+1))● | O(bm) | O(m) | O(bl) | O(b^d) | O(b^(d/2))
Optimal? | Yes● | Yes | No | No | No | Yes● | Yes●

* b is finite
● step cost is constant
○ step cost is > 0
† s is finite

b – maximum branching factor of the search tree (may be ∞)
d – depth of the optimal (least-cost) solution
m – maximum depth of the state space (may be ∞)
s – overall size of the state space
s(l) – size of the state space up to depth l
Properties of uninformed search
(without loop-checking)
Criterion | Breadth-first | Uniform-cost | Depth-first | Backtracking | Depth-limited | Iterative deepening | Bidirectional (breadth-first)
Complete? | Yes* | Yes*○ | No | No | No | Yes* | Yes*
Time | O(b^d) | O(b^(d+1))● | O(b^m) | O(b^m) | O(b^l) | O(b^d) | O(b^(d/2))
Space | O(b^d) | O(b^(d+1))● | O(bm) | O(m) | O(bl) | O(b^d) | O(b^(d/2))
Optimal? | Yes● | Yes | No | No | No | Yes● | Yes●

* b is finite
● step cost is constant
○ step cost is > 0

b – maximum branching factor of the search tree (may be ∞)
d – depth of the optimal (least-cost) solution
m – maximum depth of the state space (may be ∞)
Informed (heuristic-based) search
Use a cost estimate (of optimal path to goal) to
choose node with the least-estimated path cost
• Greedy best-first search
• cost estimate = heuristic function h (from node to goal)
• A* search
• cost estimate = actual cost g (to node) + heuristic function h (from node to goal)
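A compact sketch of A* on the Romania fragment, keeping the frontier in a priority queue ordered by f = g + h. The straight-line distances to Bucharest used as h are the values from the standard version of this example; treat the data and function names as assumptions of the sketch.

```python
import heapq

# Road distances (g increments) and straight-line distances to Bucharest (h).
ROADS = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
         "Fagaras": {"Bucharest": 211},
         "Rimnicu Vilcea": {"Pitesti": 97},
         "Pitesti": {"Bucharest": 101}}
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def a_star(start, goal):
    # Each heap entry is (f = g + h, g, state, path); heapq pops lowest f first.
    frontier = [(H[start], 0, start, [start])]
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in ROADS.get(state, {}).items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None
```

On this data, `a_star("Arad", "Bucharest")` finds Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest with cost 418, correctly preferring it to the Fagaras route (cost 450) that greedy best-first would take.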
Heuristic function: route finding example
Heuristic function = straight-line distance to the goal (e.g. to Bucharest); [distance table omitted]
• Greedy-best-first (choose node with lowest value of heuristic function)
• A* (choose node with lowest value of actual cost+heuristic function)
Properties of heuristic functions
For any node n, let
g(n, m) be the actual cost of reaching node m from n,
h(n) be the heuristic function applied to n (assume h(n) ≥ 0, with h(goal) = 0)
• Admissible: h never overestimates the actual cost (to goal) – e.g. straight-line distance
h(n) ≤ g(n, m_goal) for m_goal the cheapest goal reachable from n
• Consistent/Monotonic: h never overestimates the actual cost to any successor node plus the heuristic function applied to that node – e.g. straight-line distance
h(n) ≤ g(n, n′) + h(n′) for n′ any successor of n
Consistent/Monotonic → Admissible
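Both properties can be checked directly on an explicit graph. A minimal sketch, where the function names and the edge-list representation are assumptions, and `true_cost[n]` stands for g(n, m_goal):

```python
# Checkers for the two heuristic properties over an explicit graph.

def is_consistent(h, edges):
    # edges: (n, n2, cost) triples; require h(n) <= cost + h(n2) on every edge.
    return all(h[n] <= c + h[n2] for n, n2, c in edges)

def is_admissible(h, true_cost):
    # true_cost: node -> actual cost of the cheapest path to a goal.
    return all(h[n] <= true_cost[n] for n in h)
```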
Properties of informed search
(with or without loop-checking)
Criterion | Greedy (with) | Greedy (without) | A* (with) | A* (without)
Complete? | Yes† | No | Yes*○ | Yes*○
Optimal? | No | No | Yes◊ | Yes‡

† s is finite
* b is finite
○ step cost is > 0
◊ h consistent/monotonic
‡ h admissible

s – overall size of the state space
b – maximum branching factor of the search tree (may be ∞)
m – maximum depth of the state space (may be ∞)
h – heuristic function
A* is optimally efficient (no other optimal search algorithm is
guaranteed to expand fewer nodes) but it often runs out of space
• Greedy best-first search is not optimal: the path via Sibiu–Fagaras is 32 km longer than the path via Rimnicu Vilcea–Pitesti
• Greedy best-first search without loop checking is not complete:
Iasi → Neamt → Iasi → Neamt →…
How to identify admissible/consistent heuristics?
• Identify a relaxed version of the search problem (with a
larger graph – with more edges)
• Use cost of optimal solutions for the relaxed problem as
(admissible/consistent) heuristics for the original search
problem
8-puzzle example – relaxed problems
In the original problem, a tile can move from square A to square B if
• A is horizontally/vertically adjacent to B, and
• B is blank
Relaxed problems:
1. A tile can move from A to B if A is horizontally/vertically adjacent to B
2. A tile can move from A to B if B is blank
3. A tile can move from (any) A to (any) B
Heuristics:
1. h(n) = Manhattan distance (sum of the distances of the tiles in n from their goal positions)
2. h(n) = number of switches (blank–non-blank tile) to obtain the goal from n
3. h(n) = number of misplaced tiles in n wrt the goal
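Heuristics 1 and 3 above can be sketched directly, assuming states are 9-tuples read row by row with 0 for the blank, and goal layout (1, …, 8, 0) (the slides do not fix a particular layout):

```python
# Two 8-puzzle heuristics from relaxed problems.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced(state, goal=GOAL):
    # h3: number of misplaced tiles (the blank does not count).
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    # h1: sum over tiles of row distance + column distance to the goal position.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Since each relaxation only adds edges, every solution of the original problem survives in the relaxed one, which is why these optimal relaxed costs are admissible for the original problem.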
Heuristic search – today (example, non-examinable)
Omitted
• Local search
• Genetic algorithms
• Simulated annealing
• Non-deterministic, partially observable, or unknown environments
Adversarial search
• Competitive environment
• opponents with conflicting goals
• “Unpredictable” opponent
• solution is a strategy (= specifying a move for every possible opponent move/sequence of moves)
• Hard to solve exactly within time limits
• must approximate
Types of games
Adversarial search problems – formally
Two-player, zero-sum (constant-sum) games
• s0 : initial state
• PLAYER(s): determines which player (of two) moves in a state
• ACTIONS(s): returns the set of legal moves in a state
• RESULT(s,a): returns the state resulting from a move in a state
• TERMINAL(s): is true if the game has ended, false otherwise.
• UTILITY(s,p): returns the payoff for player p in terminal state s,
e.g. 1 (win), -1 (lose), 0 (draw), or 1 (win), 0 (lose), 1/2 (draw)
Example of two-player, zero-sum game: tic-tac-toe
[Figure: partial game tree for tic-tac-toe, from s0, with PLAYERS alternating by ply]
Perfect play for deterministic, perfect-info games
• Minimax strategy
• α-β pruning
MINIMAX(s) – best payoff against best play
for ¬TERMINAL(s):
• if PLAYER(s) = MAX: return the maximum over actions a of MINIMAX(RESULT(s,a))
• if PLAYER(s) = MIN: return the minimum over actions a of MINIMAX(RESULT(s,a))
for TERMINAL(s):
• return UTILITY(s)
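The recursion above in Python, assuming a game object exposing terminal/utility/player/actions/result methods (lower-case versions of the formal definition; the tiny two-ply game is illustrative):

```python
# MINIMAX: MAX maximises, MIN minimises, leaves return UTILITY.

def minimax(game, s):
    if game.terminal(s):
        return game.utility(s)
    values = [minimax(game, game.result(s, a)) for a in game.actions(s)]
    return max(values) if game.player(s) == "MAX" else min(values)

class TwoPlyGame:
    """MAX moves at the root, MIN at the middle layer; leaves hold utilities."""
    TREE = {"root": ["l", "r"], "l": ["l1", "l2"], "r": ["r1", "r2"]}
    UTIL = {"l1": 3, "l2": 12, "r1": 2, "r2": 4}
    def terminal(self, s): return s in self.UTIL
    def utility(self, s): return self.UTIL[s]
    def actions(self, s): return self.TREE[s]
    def result(self, s, a): return a
    def player(self, s): return "MAX" if s == "root" else "MIN"
```

Here `minimax(TwoPlyGame(), "root")` returns 3: MIN forces 3 on the left branch and 2 on the right, so MAX chooses left.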
e.g. two-ply game
[Figure: two-ply game tree with leaf UTILITY values (for MAX)]
Properties of minimax
Criterion | Minimax
Complete? | Yes*
Time | O(b^m)
Space | O(bm)
Optimal? | Yes

* if the search tree is finite
b – maximum branching factor of the search tree (may be ∞)
m – maximum depth of the state space (may be ∞)
For chess, b ≈ 35, m ≈100 for “reasonable” games: not feasible
α-β pruning
• α: the highest value that MAX is guaranteed to achieve so far (a lower bound on MAX's payoff)
• β: the lowest value that MIN is guaranteed to achieve so far (an upper bound on MAX's payoff)
Determine, for the initial state/node s: max-value(s, -∞, +∞), where
max-value(s,α,β):
• if TERMINAL(s) then return UTILITY(s)
• else, starting from v = -∞, for each action a:
• let v = max(v, min-value(RESULT(s,a),α,β))
• if v ≥ β then return v
• else let α = max(α,v)
• return v

min-value(s,α,β):
• if TERMINAL(s) then return UTILITY(s)
• else, starting from v = +∞, for each action a:
• let v = min(v, max-value(RESULT(s,a),α,β))
• if v ≤ α then return v
• else let β = min(β,v)
• return v
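The max-value/min-value pair can be folded into one Python function, assuming a game object with terminal/utility/player/actions/result methods (an illustrative interface, not fixed by the slides):

```python
# Alpha-beta pruning: returns the same value as MINIMAX while skipping
# branches that cannot affect the decision at the root.

def alphabeta(game, s, alpha=float("-inf"), beta=float("inf")):
    if game.terminal(s):
        return game.utility(s)
    if game.player(s) == "MAX":
        v = float("-inf")
        for a in game.actions(s):
            v = max(v, alphabeta(game, game.result(s, a), alpha, beta))
            if v >= beta:        # MIN already has a better option elsewhere:
                return v         # prune the remaining actions
            alpha = max(alpha, v)
    else:
        v = float("inf")
        for a in game.actions(s):
            v = min(v, alphabeta(game, game.result(s, a), alpha, beta))
            if v <= alpha:       # MAX already has a better option elsewhere:
                return v         # prune the remaining actions
            beta = min(beta, v)
    return v
```

On a two-ply tree with leaf values 3, 12 on the left and 2, 4 on the right, the right branch is cut after its first leaf (2 ≤ α = 3), yet the root value is still 3.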
α-β pruning
[Figure: worked α-β pruning example, annotating each node with its [α,β] interval]
Properties of α-β pruning
Criterion | α-β pruning
Complete? | Yes*
Time | O(b^(m/2))□
Space | O(bm)
Optimal? | Yes

* if the search tree is finite
□ if successors are examined best-first
b – maximum branching factor of the search tree (may be ∞)
m – maximum depth of the state space (may be ∞)
For chess, b ≈ 35, m ≈100 for “reasonable” games: still slow
Omitted
Resource limits:
• Cutoff-Test instead of TERMINAL
• Evaluation instead of UTILITY
• for chess, a weighted sum of features (e.g. number of white queens minus number of black queens, …)
Other types of games
Search & Games – today (example, non-examinable)
Search – summary
• Formulation of search problems
• Uninformed search algorithms: breadth-first, uniform-cost, depth-first, depth-limited, iterative deepening, backtracking, bidirectional
• Informed (heuristic-based) search algorithms: greedy-best-first, A*,
admissible and consistent/monotonic heuristic functions
• Adversarial search: minimax, α-β pruning
• Properties of search algorithms: completeness, optimality, time and
space complexity