Game playing
Chapter 6
Outline

♦ Games
♦ Perfect play
   – minimax decisions
   – α–β pruning
♦ Resource limits and approximate evaluation
♦ Games of chance
♦ Games of imperfect information
Games vs. search problems

“Unpredictable” opponent ⇒ solution is a strategy specifying a move for every possible opponent reply

Time limits ⇒ unlikely to find goal, must approximate

Plan of attack:
• Computer considers possible lines of play (Babbage, 1846)
• Algorithm for perfect play (Zermelo, 1912; Von Neumann, 1944)
• Finite horizon, approximate evaluation (Zuse, 1945; Wiener, 1948; Shannon, 1950)
• First chess program (Turing, 1951)
• Machine learning to improve evaluation accuracy (Samuel, 1952–57)
• Pruning to allow deeper search (McCarthy, 1956)
Types of games

                          deterministic                   chance
perfect information       chess, checkers, go, othello    backgammon, monopoly
imperfect information     battleships, blind tictactoe    bridge, poker, scrabble, nuclear war
Game tree (2-player, deterministic, turns)

[Figure: partial game tree for tic-tac-toe. MAX (X) moves at the root, MIN (O) replies, and the players alternate down to TERMINAL positions whose utility to MAX is −1, 0, or +1.]
Minimax

Perfect play for deterministic, perfect-information games

Idea: choose move to position with highest minimax value
   = best achievable payoff against best play

E.g., 2-ply game:

[Figure: MAX chooses among moves A1, A2, A3. The three MIN nodes below, with leaves 3 12 8, 2 4 6, and 14 5 2, back up the values 3, 2, and 2, so the root’s minimax value is 3 and MAX plays A1.]
Minimax algorithm

function Minimax-Decision(state) returns an action
   inputs: state, current state in game
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← −∞
   for a, s in Successors(state) do v ← Max(v, Min-Value(s))
   return v

function Min-Value(state) returns a utility value
   if Terminal-Test(state) then return Utility(state)
   v ← ∞
   for a, s in Successors(state) do v ← Min(v, Max-Value(s))
   return v
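For concreteness, a minimal executable sketch of the same algorithm in Python, assuming a hypothetical game object whose actions, result, terminal_test, and utility methods mirror the pseudocode:

   import math

   def minimax_decision(state, game):
       # Return the action whose resulting state has the highest minimax value.
       return max(game.actions(state),
                  key=lambda a: min_value(game.result(state, a), game))

   def max_value(state, game):
       if game.terminal_test(state):
           return game.utility(state)
       v = -math.inf
       for a in game.actions(state):
           v = max(v, min_value(game.result(state, a), game))
       return v

   def min_value(state, game):
       if game.terminal_test(state):
           return game.utility(state)
       v = math.inf
       for a in game.actions(state):
           v = min(v, max_value(game.result(state, a), game))
       return v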
Properties of minimax

Complete?? Only if tree is finite (chess has specific rules for this).
   NB a finite strategy can exist even in an infinite tree!

Optimal?? Yes, against an optimal opponent. Otherwise??

Time complexity?? O(b^m)

Space complexity?? O(bm) (depth-first exploration)

For chess, b ≈ 35, m ≈ 100 for “reasonable” games
   ⇒ exact solution completely infeasible

But do we need to explore every path?
α–β pruning example

[Figure: α–β search of the 2-ply tree, left to right. The first MIN node evaluates to 3 (leaves 3, 12, 8). At the second MIN node the first leaf is 2, so that node is worth at most 2, already worse for MAX than 3, and its remaining leaves are pruned (marked X X). The third MIN node examines leaves 14, 5, 2 and evaluates to 2. The root’s value is 3.]
Why is it called α–β?

[Figure: the current search path from the root (MAX) down through alternating MIN and MAX levels to a node of value V.]

α is the best value (to MAX) found so far off the current path

If V is worse than α, MAX will avoid it ⇒ prune that branch

Define β similarly for MIN
The α–β algorithm

function Alpha-Beta-Decision(state) returns an action
   return the a in Actions(state) maximizing Min-Value(Result(a, state))

function Max-Value(state, α, β) returns a utility value
   inputs: state, current state in game
      α, the value of the best alternative for max along the path to state
      β, the value of the best alternative for min along the path to state
   if Terminal-Test(state) then return Utility(state)
   v ← −∞
   for a, s in Successors(state) do
      v ← Max(v, Min-Value(s, α, β))
      if v ≥ β then return v
      α ← Max(α, v)
   return v

function Min-Value(state, α, β) returns a utility value
   same as Max-Value but with roles of α, β reversed
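The same Python sketch with α–β bounds added, using the hypothetical game interface from the minimax code above:

   import math

   def alphabeta_decision(state, game):
       # Pick the action with the highest backed-up value under alpha-beta search.
       best_action, best_value = None, -math.inf
       for a in game.actions(state):
           v = ab_min_value(game.result(state, a), game, -math.inf, math.inf)
           if v > best_value:
               best_action, best_value = a, v
       return best_action

   def ab_max_value(state, game, alpha, beta):
       if game.terminal_test(state):
           return game.utility(state)
       v = -math.inf
       for a in game.actions(state):
           v = max(v, ab_min_value(game.result(state, a), game, alpha, beta))
           if v >= beta:              # MIN has a better alternative upstream: prune
               return v
           alpha = max(alpha, v)
       return v

   def ab_min_value(state, game, alpha, beta):
       if game.terminal_test(state):
           return game.utility(state)
       v = math.inf
       for a in game.actions(state):
           v = min(v, ab_max_value(game.result(state, a), game, alpha, beta))
           if v <= alpha:             # MAX has a better alternative upstream: prune
               return v
           beta = min(beta, v)
       return v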
Properties of α–β

Pruning does not affect final result

Good move ordering improves effectiveness of pruning

With “perfect ordering,” time complexity = O(b^(m/2))
   ⇒ doubles solvable depth

A simple example of the value of reasoning about which computations are relevant (a form of metareasoning)

Unfortunately, 35^50 is still impossible!
Resource limits

Standard approach:
• Use Cutoff-Test instead of Terminal-Test
   e.g., depth limit (perhaps add quiescence search)
• Use Eval instead of Utility
   i.e., evaluation function that estimates desirability of position

Suppose we have 100 seconds, explore 10^4 nodes/second
⇒ 10^6 nodes per move ≈ 35^(8/2)
⇒ α–β reaches depth 8 ⇒ pretty good chess program
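A sketch of the depth-limited variant, assuming a hypothetical game.eval(state) heuristic alongside the interface used above:

   import math

   def ab_cutoff_max(state, game, alpha, beta, depth, limit):
       if game.terminal_test(state):
           return game.utility(state)
       if depth >= limit:              # Cutoff-Test replaces Terminal-Test
           return game.eval(state)     # Eval replaces Utility (assumed heuristic)
       v = -math.inf
       for a in game.actions(state):
           v = max(v, ab_cutoff_min(game.result(state, a), game,
                                    alpha, beta, depth + 1, limit))
           if v >= beta:
               return v
           alpha = max(alpha, v)
       return v

   def ab_cutoff_min(state, game, alpha, beta, depth, limit):
       if game.terminal_test(state):
           return game.utility(state)
       if depth >= limit:
           return game.eval(state)
       v = math.inf
       for a in game.actions(state):
           v = min(v, ab_cutoff_max(game.result(state, a), game,
                                    alpha, beta, depth + 1, limit))
           if v <= alpha:
               return v
           beta = min(beta, v)
       return v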
Evaluation functions

[Figure: two chess positions; in the first, Black to move, White is slightly better; in the second, White to move, Black is winning.]

For chess, typically linear weighted sum of features

   Eval(s) = w_1 f_1(s) + w_2 f_2(s) + . . . + w_n f_n(s)

e.g., w_1 = 9 with
   f_1(s) = (number of white queens) − (number of black queens), etc.
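As an illustration, a material-count evaluator in exactly this form (a sketch; the weights are the conventional material values, and count is a hypothetical method giving the number of pieces of a given colour and type):

   # Conventional material weights: queen 9, rook 5, bishop/knight 3, pawn 1.
   WEIGHTS = {'Q': 9, 'R': 5, 'B': 3, 'N': 3, 'P': 1}

   def material_eval(state):
       # Linear weighted sum: f_i(s) = (white count - black count) for piece type i.
       return sum(w * (state.count('white', piece) - state.count('black', piece))
                  for piece, w in WEIGHTS.items())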
Digression: Exact values don’t matter

[Figure: two MAX/MIN trees, one with leaves 1 2 1 20, the other with the same leaves rescaled by a monotonic transformation; MAX makes the same choice in both.]

Behaviour is preserved under any monotonic transformation of Eval

Only the order matters:
   payoff in deterministic games acts as an ordinal utility function
Deterministic games in practice

Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions.

Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searches 200 million positions per second, uses very sophisticated evaluation, and undisclosed methods for extending some lines of search up to 40 ply.

Othello: human champions refuse to compete against computers, who are too good.

Go: human champions refuse to compete against computers, who are too bad. In go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.
Nondeterministic games: backgammon

[Figure: a backgammon board, points numbered 0–25.]
Nondeterministic games in general

In nondeterministic games, chance introduced by dice, card-shuffling

Simplified example with coin-flipping:

[Figure: MAX at the root chooses between two CHANCE nodes, each a 0.5/0.5 coin flip over MIN nodes. The MIN nodes evaluate to 2, 4, 0, −2 from leaves 2 4 7 4 and 6 0 5 −2, so the chance nodes are worth 3 and −1 and MAX moves left.]
Algorithm for nondeterministic games

Expectiminimax gives perfect play

Just like Minimax, except we must also handle chance nodes:

…
if state is a Max node then
   return the highest ExpectiMinimax-Value of Successors(state)
if state is a Min node then
   return the lowest ExpectiMinimax-Value of Successors(state)
if state is a chance node then
   return average of ExpectiMinimax-Value of Successors(state)
…
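A compact executable sketch, extending the earlier hypothetical game interface with node-type predicates and a chance_successors method returning (probability, state) pairs (all assumed names):

   def expectiminimax_value(state, game):
       # As in minimax, plus a weighted-average case for chance nodes.
       if game.terminal_test(state):
           return game.utility(state)
       if game.is_chance_node(state):
           return sum(p * expectiminimax_value(s, game)
                      for p, s in game.chance_successors(state))
       values = [expectiminimax_value(game.result(state, a), game)
                 for a in game.actions(state)]
       return max(values) if game.is_max_node(state) else min(values)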
Nondeterministic games in practice

Dice rolls increase b: 21 possible rolls with 2 dice

Backgammon ≈ 20 legal moves (can be 6,000 with 1-1 roll)

   depth 4 = 20 × (21 × 20)^3 ≈ 1.2 × 10^9

As depth increases, probability of reaching a given node shrinks
   ⇒ value of lookahead is diminished

α–β pruning is much less effective

TD-Gammon uses depth-2 search + very good Eval
   ≈ world-champion level
Digression: Exact values DO matter

[Figure: two trees whose leaves have the same ordering but different magnitudes. On the left, leaves 2 3 1 4 under 0.9/0.1 chance nodes give values 2.1 and 1.3, so MAX moves left; on the right, leaves 20 30 1 400 give 21 and 40.9, so MAX moves right.]

Behaviour is preserved only by positive linear transformation of Eval

Hence Eval should be proportional to the expected payoff
Games of imperfect information

E.g., card games, where opponent’s initial cards are unknown

Typically we can calculate a probability for each possible deal

Seems just like having one big dice roll at the beginning of the game∗

Idea: compute the minimax value of each action in each deal,
   then choose the action with highest expected value over all deals∗

Special case: if an action is optimal for all deals, it’s optimal.∗

GIB, current best bridge program, approximates this idea by
   1) generating 100 deals consistent with bidding information
   2) picking the action that wins most tricks on average
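A sketch of this deal-averaging idea, reusing min_value from the minimax sketch above plus a hypothetical sample_deals method that generates complete deals consistent with what has been observed (the ∗ caveat is taken up under “Proper analysis” below):

   from collections import defaultdict

   def average_over_deals_decision(observed, game, n_deals=100):
       # Score each action by its summed minimax value across sampled deals;
       # dividing by n_deals would not change the argmax.
       totals = defaultdict(float)
       for deal in game.sample_deals(observed, n_deals):
           for a in game.actions(deal):
               totals[a] += min_value(game.result(deal, a), game)
       return max(totals, key=totals.get)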
Example

Four-card bridge/whist/hearts hand, Max to play first

[Figure: card-by-card play-outs of the hand MAX 6 6 8 7 against MIN 4 2 9 3. Played with knowledge of either possible deal, the game value is 0; the line MAX must choose without knowing the deal has value −0.5.]
Commonsense example

Road A leads to a small heap of gold pieces
Road B leads to a fork:
   take the left fork and you’ll find a mound of jewels;
   take the right fork and you’ll be run over by a bus.

Road A leads to a small heap of gold pieces
Road B leads to a fork:
   take the left fork and you’ll be run over by a bus;
   take the right fork and you’ll find a mound of jewels.

Road A leads to a small heap of gold pieces
Road B leads to a fork:
   guess correctly and you’ll find a mound of jewels;
   guess incorrectly and you’ll be run over by a bus.
Proper analysis

∗ Intuition that the value of an action is the average of its values in all actual states is WRONG

With partial observability, value of an action depends on the information state or belief state the agent is in

Can generate and search a tree of information states

Leads to rational behaviours such as
♦ Acting to obtain information
♦ Signalling to one’s partner
♦ Acting randomly to minimize information disclosure
Summary

Games are fun to work on! (and dangerous)

They illustrate several important points about AI

♦ perfection is unattainable ⇒ must approximate
♦ good idea to think about what to think about
♦ uncertainty constrains the assignment of values to states
♦ optimal decisions depend on information state, not real state

Games are to AI as grand prix racing is to automobile design