
Introduction to
Artificial Intelligence
with Python

Artificial Intelligence

Search

Knowledge
P→Q, P ∴ Q

Uncertainty

Optimization

Learning

Neural Networks

Language

Search


Search Problems

agent
entity that perceives its environment and acts upon that environment

state
a configuration of the agent and its environment

(15-puzzle diagrams: three different configurations of the tiles)

initial state
the state in which the agent begins

(15-puzzle diagram: a scrambled board labeled "initial state")

actions
choices that can be made in a state

actions
ACTIONS(s) returns the set of actions that can be executed in state s


transition model
a description of what state results from performing any applicable action in any state

transition model
RESULT(s, a) returns the state resulting from performing action a in state s

(15-puzzle diagrams: RESULT(s, a) maps a board and a sliding action to the successor board)

state space
the set of all states reachable from the initial state by any sequence of actions

(diagram: the 15-puzzle states reachable from the initial state, connected by actions)

goal test
way to determine whether a given state is a goal state

path cost
numerical cost associated with a given path

(graph diagrams: nodes A through M connected by edges; one version with varying edge costs, one with every edge cost equal to 1)

Search Problems
• initial state
• actions
• transition model
• goal test
• path cost function

solution
a sequence of actions that leads from the initial state to a goal state

optimal solution
a solution that has the lowest path cost among all solutions

node
a data structure that keeps track of:
– a state
– a parent (node that generated this node)
– an action (action applied to parent to get node)
– a path cost (from initial state to node)
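A node like this can be sketched as a small Python class (the class and method names are illustrative, not from the slides):

```python
class Node:
    """A node in the search tree."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # a configuration of agent and environment
        self.parent = parent        # node that generated this node
        self.action = action        # action applied to parent to get this node
        self.path_cost = path_cost  # cost from the initial state to this node

    def solution(self):
        """Follow parent pointers back to the root to recover the action sequence."""
        actions = []
        node = self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```

Once a goal node is found, `solution()` reconstructs the sequence of actions that reached it.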

Approach
Start with a frontier that contains the initial state.
Repeat:
• If the frontier is empty, then no solution.
• Remove a node from the frontier.
• If node contains goal state, return the solution.
• Expand node, add resulting nodes to the frontier.

Find a path from A to E.
(animation over a graph with nodes A–F; repeated frames omitted)
Frontier trace: {A} → remove A, add B → {B} → remove B, add C and D → {C, D} → remove C, add E → {E, D} → remove E: E is the goal, so return the solution.

What could go wrong?

Find a path from A to E.
(same graph, but B also has an edge back to A; repeated frames omitted)
Frontier trace: {A} → remove A, add B → {B} → remove B, add A, C, and D → {A, C, D} → A is back in the frontier, so the search can shuttle between A and B forever.

Revised Approach
• Start with a frontier that contains the initial state.
• Start with an empty explored set.
• Repeat:
  – If the frontier is empty, then no solution.
  – Remove a node from the frontier.
  – If node contains goal state, return the solution.
  – Add the node to the explored set.
  – Expand node, add resulting nodes to the frontier if they aren't already in the frontier or the explored set.

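The revised approach can be sketched as a short Python loop. The function and parameter names here are illustrative: `expand(state)` yields neighboring states, and `remove(frontier)` pops the next state (its choice is what distinguishes the search strategies):

```python
def graph_search(initial_state, goal_test, expand, remove):
    """Frontier plus an explored set, as in the revised approach."""
    frontier = [initial_state]
    explored = set()
    while True:
        if not frontier:
            return None                      # frontier empty: no solution
        state = remove(frontier)
        if goal_test(state):
            return state                     # goal found
        explored.add(state)
        for neighbor in expand(state):
            # Only add states not already in the frontier or explored set.
            if neighbor not in frontier and neighbor not in explored:
                frontier.append(neighbor)
```

With `remove=lambda f: f.pop()` the frontier behaves as a stack (depth-first search); with `remove=lambda f: f.pop(0)` it behaves as a queue (breadth-first search).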

stack
last-in first-out data type

Find a path from A to E (stack frontier).
(repeated animation frames omitted)
Frontier / explored-set trace: {A} → remove A (explored: A), add B → remove B (explored: A B), add C and D → remove D (explored: A B D), add F → remove F (explored: A B D F) → remove C (explored: A B D F C), add E → remove E: goal found.

Depth-First Search

depth-first search
search algorithm that always expands the deepest node in the frontier

Breadth-First Search

breadth-first search
search algorithm that always expands the shallowest node in the frontier

queue
first-in first-out data type
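The only difference between the two frontier types is which element gets removed; a minimal sketch using Python's `collections.deque`:

```python
from collections import deque

# Stack behavior (depth-first search): last in, first out.
frontier = deque(["A", "B", "C"])
last_in_first_out = frontier.pop()        # removes "C", the most recent addition

# Queue behavior (breadth-first search): first in, first out.
frontier = deque(["A", "B", "C"])
first_in_first_out = frontier.popleft()   # removes "A", the oldest addition
```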

Find a path from A to E (queue frontier).
(repeated animation frames omitted)
Frontier / explored-set trace: {A} → remove A (explored: A), add B → remove B (explored: A B), add C and D → remove C (explored: A B C), add E → remove D (explored: A B C D), add F → remove E: goal found.

Depth-First Search
(maze animation: depth-first search exploring from A toward B, one square per frame; repeated frames omitted)

Breadth-First Search
(maze animation: breadth-first search exploring from A toward B, one square per frame; repeated frames omitted)

uninformed search
search strategy that uses no problem-specific knowledge

informed search
search strategy that uses problem-specific knowledge to find solutions more efficiently

greedy best-first search
search algorithm that expands the node that is closest to the goal, as estimated by a heuristic function h(n)

Heuristic function?
(maze with start A and goal B: which frontier square looks closer to the goal?)

Heuristic function? Manhattan distance.
(Manhattan distance: the number of squares vertically plus horizontally from a square to the goal, ignoring walls)
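Manhattan distance is a one-liner in Python (cells here are assumed to be `(row, col)` tuples):

```python
def manhattan_distance(cell, goal):
    """h(n): vertical plus horizontal squares from `cell` to `goal`,
    ignoring any walls in between."""
    (row, col), (goal_row, goal_col) = cell, goal
    return abs(row - goal_row) + abs(col - goal_col)
```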

Greedy Best-First Search
(maze animation: each square is labeled with its Manhattan distance h(n) to the goal B; at every step the frontier square with the smallest h(n) is expanded; repeated frames omitted)

Greedy Best-First Search
(a second maze with h(n) labels, where always following the smallest heuristic value leads the search down a longer-than-optimal path; repeated frames omitted)

A* search
search algorithm that expands node with lowest value of g(n) + h(n)
g(n) = cost to reach node
h(n) = estimated cost to goal
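A* can be sketched with a priority queue ordered by g(n) + h(n). This is a minimal illustration, not the course's implementation: `neighbors(state)` is assumed to yield `(next_state, step_cost)` pairs and `h` is the heuristic.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the frontier node with the lowest g(n) + h(n)."""
    # Each entry: (g + h, g, state, path so far).
    frontier = [(h(start), 0, start, [start])]
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in neighbors(state):
            if nxt not in explored:
                heapq.heappush(frontier,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None   # frontier empty: no solution
```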

A* Search
(maze animation: each square is labeled g(n) + h(n), e.g. 1+16, 2+15, …, 20+1; at every step the frontier square with the smallest g(n) + h(n) is expanded, and the search reaches B along the optimal path; repeated frames omitted)

A* search
optimal if:
– h(n) is admissible (never overestimates the true cost), and
– h(n) is consistent (for every node n and successor n' with step cost c, h(n) ≤ h(n') + c)

Adversarial Search

(tic-tac-toe boards illustrating a game in progress; diagrams omitted)

Minimax

(terminal tic-tac-toe boards with values: -1 if O has won, 0 for a draw, 1 if X has won)

Minimax
• MAX (X) aims to maximize score.
• MIN (O) aims to minimize score.

Game
• S0 : initial state
• PLAYER(s) : returns which player to move in state s
• ACTIONS(s) : returns legal moves in state s
• RESULT(s, a) : returns state after action a taken in state s
• TERMINAL(s) : checks if state s is a terminal state
• UTILITY(s) : final numerical value for terminal state s
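For tic-tac-toe, these six pieces can be sketched directly in Python (a minimal sketch; board representation and helper names are illustrative):

```python
X, O, EMPTY = "X", "O", None

def initial_state():
    """S0: the empty 3x3 board."""
    return [[EMPTY] * 3 for _ in range(3)]

def player(board):
    """X moves first, so X is to move whenever X and O have equal counts."""
    xs = sum(row.count(X) for row in board)
    os = sum(row.count(O) for row in board)
    return X if xs == os else O

def actions(board):
    """Legal moves: the set of empty squares."""
    return {(i, j) for i in range(3) for j in range(3) if board[i][j] is EMPTY}

def result(board, action):
    """Board after the current player takes `action` (copies, never mutates)."""
    i, j = action
    new = [row[:] for row in board]
    new[i][j] = player(board)
    return new

def winner(board):
    """The player with three in a row, or None."""
    lines = [[(i, j) for j in range(3)] for i in range(3)]            # rows
    lines += [[(i, j) for i in range(3)] for j in range(3)]           # columns
    lines += [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
    for line in lines:
        values = {board[i][j] for i, j in line}
        if len(values) == 1 and values != {EMPTY}:
            return values.pop()
    return None

def terminal(board):
    return winner(board) is not None or not actions(board)

def utility(board):
    """1 if X has won, -1 if O has won, 0 otherwise."""
    return {X: 1, O: -1, None: 0}[winner(board)]
```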

Initial State
(the empty tic-tac-toe board)

PLAYER(s)
(on the empty board, PLAYER returns X; after X has moved, PLAYER returns O)

ACTIONS(s)
(for a board in progress, ACTIONS returns the set of empty squares the current player can move into)

RESULT(s, a)
(RESULT of a board and a move returns the board with that move made)

TERMINAL(s)
(TERMINAL returns false for a game in progress and true for a finished game)

UTILITY(s)
(UTILITY returns 1 for a board X has won and -1 for a board O has won)

(game-tree walkthrough: terminal boards get VALUE 1, 0, or -1; with O to move, MIN-VALUE of a state is the minimum of the MAX-VALUEs of its successor states, and with X to move, MAX-VALUE is the maximum of the MIN-VALUEs; board diagrams omitted)

(abstract game tree: MAX nodes over leaves 5, 3, 9 and 2, 8 take values 9 and 8; the MIN node above them chooses 8)

Minimax
Given a state s:
• MAX picks action a in ACTIONS(s) that produces the highest value of MIN-VALUE(RESULT(s, a))
• MIN picks action a in ACTIONS(s) that produces the smallest value of MAX-VALUE(RESULT(s, a))

Minimax
function MAX-VALUE(state):
    if TERMINAL(state):
        return UTILITY(state)
    v = -∞
    for action in ACTIONS(state):
        v = MAX(v, MIN-VALUE(RESULT(state, action)))
    return v

Minimax
function MIN-VALUE(state):
    if TERMINAL(state):
        return UTILITY(state)
    v = ∞
    for action in ACTIONS(state):
        v = MIN(v, MAX-VALUE(RESULT(state, action)))
    return v
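The two pseudocode functions translate directly to Python. Here they run on a hand-built toy game tree (the tree, with a MAX root over three MIN nodes and integer leaves as utilities, is illustrative):

```python
import math

# Non-terminal states map to their successor states; integer leaves are utilities.
TREE = {"root": ["L", "M", "R"], "L": [4, 8, 5], "M": [9, 3, 7], "R": [2, 4, 6]}

def terminal(state):
    return isinstance(state, int)

def utility(state):
    return state

def max_value(state):
    if terminal(state):
        return utility(state)
    v = -math.inf
    for successor in TREE[state]:
        v = max(v, min_value(successor))
    return v

def min_value(state):
    if terminal(state):
        return utility(state)
    v = math.inf
    for successor in TREE[state]:
        v = min(v, max_value(successor))
    return v
```

On this tree the MIN nodes take values 4, 3, and 2, so `max_value("root")` is 4.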

Optimizations

(game tree: a MAX node over three MIN nodes with leaves 4, 8, 5 | 9, 3, 7 | 2, 4, 6; the MIN values are 4, 3, and 2, so MAX picks 4)

(with pruning: once the second MIN node is known to be ≤ 3 and the third ≤ 2, neither can beat the 4 already found, so their remaining leaves need never be examined)

Alpha-Beta Pruning
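A sketch of minimax with alpha-beta pruning, on the same style of hand-built toy tree (tree and names are illustrative): α tracks the best value MAX can guarantee so far, β the best value MIN can guarantee.

```python
import math

TREE = {"root": ["L", "M", "R"], "L": [4, 8, 5], "M": [9, 3, 7], "R": [2, 4, 6]}

def max_value(state, alpha=-math.inf, beta=math.inf):
    if isinstance(state, int):          # terminal leaf: its utility
        return state
    v = -math.inf
    for successor in TREE[state]:
        v = max(v, min_value(successor, alpha, beta))
        if v >= beta:                   # MIN above will never allow this branch
            return v
        alpha = max(alpha, v)
    return v

def min_value(state, alpha=-math.inf, beta=math.inf):
    if isinstance(state, int):
        return state
    v = math.inf
    for successor in TREE[state]:
        v = min(v, max_value(successor, alpha, beta))
        if v <= alpha:                  # MAX above already has something better
            return v
        beta = min(beta, v)
    return v
```

On this tree the result is the same as plain minimax (4), but the second MIN node stops after seeing the 3 and the third after seeing the 2.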

255,168
total possible Tic-Tac-Toe games

288,000,000,000
total possible chess games after four moves each

10^29000
total possible chess games (lower bound)

Depth-Limited Minimax

evaluation function
function that estimates the expected utility of the game from a given state
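Depth-limited minimax combines the two ideas: search normally, but once the depth limit is reached, fall back on the evaluation function instead of recursing further. A minimal sketch, with the game supplied as plain functions (all names illustrative):

```python
def depth_limited_minimax(state, depth, maximizing,
                          terminal, utility, actions, result, evaluate):
    """Minimax cut off at `depth`; `evaluate` estimates non-terminal states."""
    if terminal(state):
        return utility(state)
    if depth == 0:
        return evaluate(state)          # estimate instead of searching further
    children = (
        depth_limited_minimax(result(state, a), depth - 1, not maximizing,
                              terminal, utility, actions, result, evaluate)
        for a in actions(state)
    )
    return max(children) if maximizing else min(children)
```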

https://xkcd.com/832/

Search

Introduction to
Artificial Intelligence
with Python