arXiv:cs/9901001v1 [cs.LG] 5 Jan 1999
TDLeaf(λ): Combining Temporal Difference Learning with Game-Tree Search
Department of Systems Engineering, Australian National University, Canberra 0200, Australia
Department of Computer Science, Australian National University, Canberra 0200, Australia
In this paper we present TDLeaf(λ), a variation on the TD(λ) algorithm that enables it to be used
in conjunction with minimax search. We present some experiments in both chess and backgammon
which demonstrate its utility and provide comparisons with TD(λ) and another less radical variant, TD-
directed(λ). In particular, our chess program, “KnightCap,” used TDLeaf(λ) to learn its evaluation
function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved
from a 1650 rating to a 2100 rating in just 308 games. We discuss some of the reasons for this success
and the relationship between our results and Tesauro’s results in backgammon.
1. Introduction
TD(λ), developed by Sutton [6], has its roots in the
learning algorithm of Samuel’s checkers program [4]. It
is an elegant algorithm for approximating the expected
long term future cost of a stochastic dynamical system
as a function of the current state. The mapping from
states to future cost is implemented by a parameterised
function approximator such as a neural network. The
parameters are updated online after each state transition,
or in batch updates after several state transitions. The
goal of the algorithm is to improve the cost estimates as
the number of observed state transitions and associated
costs increases.
Tesauro’s TD-Gammon is perhaps the most remark-
able success of TD(λ). It is a neural network backgam-
mon player that has proven itself to be competitive with
the best human backgammon players [8].
Many authors have discussed the peculiarities of
backgammon that make it particularly suitable for Tem-
poral Difference learning with self-play [7, 5, 3]. Princi-
pal among these are speed of play: TD-Gammon learnt
from several hundred thousand games of self-play, rep-
resentation smoothness: the evaluation of a backgam-
mon position is a reasonably smooth function of the
position (viewed, say, as a vector of piece counts), mak-
ing it easier to find a good neural network approxima-
tion, and stochasticity: backgammon is a random game
which forces at least a minimal amount of exploration
of search space.
As TD-Gammon in its original form only searched
one-ply ahead, we feel this list should be appended
with: shallow search is good enough against hu-
mans. There are two possible reasons for this; ei-
ther one does not gain a lot by searching deeper in
backgammon (questionable given that recent versions
of TD-Gammon search to three-ply for a significant
performance improvement), or humans are incapable of
searching deeply and so TD-Gammon is only compet-
ing in a pool of shallow searchers.
In contrast, finding a representation for chess, othello
or Go which allows a small neural network to order
moves at one-ply with near human performance is a
far more difficult task [9, 11, 5]. For these games, re-
liable tactical evaluation is difficult to achieve without
deep search. This requires an exponential increase in
the number of positions evaluated as the search depth
increases. Consequently, the computational cost of the
evaluation function has to be low and hence, most chess
and othello programs use linear functions.
In the next section we look at reinforcement learning
(the broad category into which TD(λ) falls), and then in
subsequent sections we look at TD(λ) in some detail and
introduce two variations on the theme: TD-directed(λ)
and TDLeaf(λ). The first uses minimax search to gen-
erate better training data, and the second, TDLeaf(λ),
is used to learn an evaluation function for use in deep
minimax search.
2. Reinforcement Learning
The popularly known and best understood learning tech-
niques fall into the category of supervised learning.
This category is distinguished by the fact that for each
input upon which the system is trained, the “correct”
output is known. This allows us to measure the error
and use it to train the system.
For example, if our system maps input X_i to output
Y′_i, then, with Y_i as the “correct” output, we can use
(Y′_i − Y_i)² as a measure of the error corresponding to
X_i. Summing this value across a set of training exam-
ples yields an error measure of the form

Σ_i (Y′_i − Y_i)²
which can be used by training techniques such as back
propagation.
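To make this concrete, the following is a minimal Python sketch of such a squared-error measure for a hypothetical linear model; the feature matrix, targets and parameter vector are invented purely for illustration and are not taken from the paper.

```python
import numpy as np

def squared_error(predictions, targets):
    """Sum of squared differences (Y'_i - Y_i)^2 over a set of training examples."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(np.sum((predictions - targets) ** 2))

# Illustrative data: a linear model Y'_i = w . X_i on three training examples.
X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # inputs X_i
Y = np.array([1.0, 0.0, -1.0])                       # "correct" outputs Y_i
w = np.array([0.8, -0.9])                            # current parameters

print(squared_error(X @ w, Y))  # the quantity a supervised trainer would minimise
```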
Reinforcement learning differs substantially from su-
pervised learning in that the “correct” output is not
known. Hence, there is no direct measure of error;
instead, a scalar reward is given for the responses to a
series of inputs.
Consider an agent reacting to its environment (a gen-
eralisation of the two-player game scenario). Let S de-
note the set of all possible environment states. Time
proceeds with the agent performing actions at discrete
time steps t = 1, 2, … . At time t the agent finds the
environment in state xt ∈ S, and has available a set of
actions Axt . The agent chooses an action at ∈ Axt ,
which takes the environment to state xt+1 with proba-
bility p(xt, xt+1, at). After a determined series of ac-
tions in the environment, perhaps when a goal has been
achieved or has become impossible, the scalar reward
r(x_N), where N is the number of actions in the series, is
awarded to the agent. These rewards are often discrete,
e.g. “1” for success, “−1” for failure, and “0” otherwise.
For ease of notation we will assume all series of ac-
tions have a fixed length of N (this is not essential). If
we assume that the agent chooses its actions according
to some function a(x) of the current state x (so that
a(x) ∈ Ax), the expected reward from each state x ∈ S
is given by
J∗(x) := E_{x_N|x} r(x_N),    (1)
where the expectation is with respect to the transition
probabilities p(xt, xt+1, a(xt)).
Once we have J∗, we can ensure that actions are
chosen optimally in any state by using the following
equation to minimise the expected reward for the en-
vironment, i.e. the other player in the game:

a∗(x) := argmin_{a∈A_x} J∗(x′_a).    (2)
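As an illustrative sketch of how equation (2) would be applied, the snippet below chooses the action whose successor state minimises the evaluation handed to the opponent. The helper names (`evaluate`, `available_actions`, `successor`) are assumptions made for the example, not an interface described in the paper.

```python
def optimal_action(x, evaluate, available_actions, successor):
    """Pick the action a in A_x minimising the expected reward of the
    environment (the other player), as in equation (2).

    evaluate(state)          -- J* (or an approximation of it)
    available_actions(state) -- the set A_x of actions available in state x
    successor(state, action) -- the state x'_a reached by taking `action` in x
    """
    return min(available_actions(x), key=lambda a: evaluate(successor(x, a)))
```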
For very large state spaces S it is not possible to store
the value of J∗(x) for every x ∈ S, so instead we
might try to approximate J∗ using a parameterised func-
tion class J̃ : S × R^k → R, for example linear func-
tions, splines, neural networks, etc. J̃(·, w) is assumed
to be a differentiable function of its parameters w =
(w1, . . . , wk). The aim is to find w so that J̃(x, w) is
“close to” J∗, at least in so far as it generates the
correct ordering of moves.
This approach to learning is quite different from that
of supervised learning where the aim is to minimise an
explicit error measurement for each data point.
Another significant difference between the two
paradigms is the nature of the data used in training.
With supervised learning it is fixed, whilst with rein-
forcement learning the states which occur during train-
ing are dependent upon the agent’s choice of action, and
thus on the training algorithm which is modifying the
agent. This dependency complicates the task of proving
convergence for TD(λ) in the general case [2].
3. The TD(λ) algorithm
Temporal Difference learning, or TD(λ), is perhaps the
best known of the reinforcement learning algorithms. It
provides a way of using the scalar rewards such that
existing supervised training techniques can be used to
tune the function approximator. Tesauro’s TD-Gammon
for example, uses back propagation to train a neural net-
work function approximator, with TD(λ) managing this
process and calculating the necessary error values.
Here we consider how TD(λ) would be used to train
an agent playing a two-player game, such as chess or
backgammon.
Suppose x1, . . . , xN−1, xN is a sequence of states in
one game. For a given parameter vector w, define the
temporal difference associated with the transition x_t → x_{t+1} by

d_t := J̃(x_{t+1}, w) − J̃(x_t, w).    (3)
Note that dt measures the difference between the reward
predicted by J̃(·, w) at time t + 1, and the reward pre-
dicted by J̃(·, w) at time t. The true evaluation function
J∗ has the property
E_{x_{t+1}|x_t} [ J∗(x_{t+1}) − J∗(x_t) ] = 0,

so if J̃(·, w) is a good approximation to J∗, E_{x_{t+1}|x_t} d_t
should be close to zero. For ease of notation we will
assume that J̃(xN , w) = r(xN ) always, so that the final
temporal difference satisfies
d_{N−1} = J̃(x_N, w) − J̃(x_{N−1}, w) = r(x_N) − J̃(x_{N−1}, w).

That is, d_{N−1} is the difference between the true out-
come of the game and the prediction at the penultimate
move.
At the end of the game, the TD(λ) algorithm updates
the parameter vector w according to the formula
w := w + α Σ_{t=1}^{N−1} ∇J̃(x_t, w) [ Σ_{j=t}^{N−1} λ^{j−t} d_j ],    (4)
where ∇J̃(·, w) is the vector of partial derivatives of J̃
with respect to its parameters. The positive parameter
α controls the learning rate and would typically be “an-
nealed” towards zero during the course of a long series
of games. The parameter λ ∈ [0, 1] controls the extent
to which temporal differences propagate backwards in
time. To see this, compare equation (4) for λ = 0:
w := w + α Σ_{t=1}^{N−1} ∇J̃(x_t, w) d_t
  = w + α Σ_{t=1}^{N−1} ∇J̃(x_t, w) [ J̃(x_{t+1}, w) − J̃(x_t, w) ]    (5)
and λ = 1:
w := w + α Σ_{t=1}^{N−1} ∇J̃(x_t, w) [ r(x_N) − J̃(x_t, w) ].    (6)
Consider each term contributing to the sums in equa-
tions (5) and (6). For λ = 0 the parameter vector is
being adjusted in such a way as to move J̃(xt, w) —
the predicted reward at time t — closer to J̃(xt+1, w)
— the predicted reward at time t+1. In contrast, TD(1)
adjusts the parameter vector in such a way as to move
the predicted reward at time step t closer to the final re-
ward at time step N . Values of λ between zero and one
interpolate between these two behaviours. Note that (6)
is equivalent to gradient descent on the error function
Σ_{t=1}^{N−1} [ r(x_N) − J̃(x_t, w) ]².
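The update of equation (4) is straightforward to implement. The following Python sketch assumes, purely for illustration, a linear evaluation J̃(x, w) = w · φ(x) (so that ∇J̃(x, w) = φ(x)) and a user-supplied feature map `phi`; it is not KnightCap's code.

```python
import numpy as np

def td_lambda_update(states, outcome, w, phi, alpha=0.1, lam=0.7):
    """One end-of-game TD(lambda) update, equation (4), for a linear
    evaluation J~(x, w) = w . phi(x).

    states  -- the positions x_1, ..., x_N observed during the game
    outcome -- the final reward r(x_N)
    """
    n = len(states)
    grads = [np.asarray(phi(x), dtype=float) for x in states]   # grad J~(x_t, w)
    values = [float(np.dot(w, g)) for g in grads]
    values[-1] = outcome            # convention: J~(x_N, w) = r(x_N)

    # temporal differences d_t = J~(x_{t+1}, w) - J~(x_t, w), equation (3)
    d = [values[t + 1] - values[t] for t in range(n - 1)]

    new_w = np.asarray(w, dtype=float).copy()
    for t in range(n - 1):
        # lambda-weighted sum of future temporal differences
        weighted = sum(lam ** (j - t) * d[j] for j in range(t, n - 1))
        new_w += alpha * grads[t] * weighted
    return new_w
```

Setting `lam=0` or `lam=1` in this sketch recovers the two extreme behaviours of equations (5) and (6).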
Tesauro [7, 8] and those who have replicated his work
with backgammon, report that the results are insensitive
to the value of λ and commonly use a value around 0.7.
Recent work by Beale and Smith [1] however, suggests
that in the domain of chess there is greater sensitivity
to the value of λ, with it perhaps being profitable to
dynamically tune λ.
Successive parameter updates according to the TD(λ)
algorithm should, over time, lead to improved predic-
tions of the expected reward J̃(·, w). Provided the ac-
tions a(xt) are independent of the parameter vector w,
it can be shown that for linear J̃(·, w), the TD(λ) algo-
rithm converges to a near-optimal parameter vector [10].
Unfortunately, there is no such guarantee if J̃(·, w) is
non-linear [10], or if a(xt) depends on w [2].
4. Two New Variants
For argument’s sake, assume any action a taken in state
x leads to a predetermined state which we will denote
by x′_a. Once an approximation J̃(·, w) to J∗ has been
found, we can use it to choose actions in state x by
picking the action a ∈ A_x whose successor state x′_a
minimizes the opponent’s expected reward¹:

ã(x) := argmin_{a∈A_x} J̃(x′_a, w).    (7)
This was the strategy used in TD-Gammon. Unfortu-
nately, for games like othello and chess it is difficult to
accurately evaluate a position by looking only one move
or ply ahead. Most programs for these games employ
some form of minimax search. In minimax search, one
builds a tree from position x by examining all possible
moves for the computer in that position, then all possi-
ble moves for the opponent, and then all possible moves
for the computer and so on to some predetermined depth
d. The leaf nodes of the tree are then evaluated using
a heuristic evaluation function (such as J̃(·, w)), and
¹ If successor states are only determined stochastically by the
choice of a, we would choose the action minimizing the expected
reward over the choice of successor states.
the resulting scores are propagated back up the tree by
choosing at each stage the move which leads to the best
position for the player on the move. See figure 1 for
an example game tree and its minimax evaluation. With
reference to the figure, note that the evaluation assigned
to the root node is the evaluation of the leaf node of the
principal variation: the sequence of moves taken from
the root to the leaf if each side chooses the best available
move.
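The sketch below shows a fixed-depth minimax search that returns both the backed-up value and the leaf of the principal variation, which is the quantity TDLeaf(λ) will need. The `legal_moves` and `apply_move` helpers are assumed for the example; they are not part of the paper.

```python
def minimax(position, depth, maximising, evaluate, legal_moves, apply_move):
    """Depth-limited minimax. Returns (value, leaf), where `leaf` is the
    position at the end of the principal variation, i.e. the leaf whose
    heuristic evaluation is propagated back up to `position`."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), position

    best_value, best_leaf = None, None
    for move in moves:
        value, leaf = minimax(apply_move(position, move), depth - 1,
                              not maximising, evaluate, legal_moves, apply_move)
        better = (best_value is None
                  or (maximising and value > best_value)
                  or (not maximising and value < best_value))
        if better:
            best_value, best_leaf = value, leaf
    return best_value, best_leaf
```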
Our TD-directed(λ) variant utilises minimax search
by allowing play to be guided by minimax, but still de-
fines the temporal differences to be the differences in
the evaluations of successive board positions occurring
during the game, as per equation (3).
Let J̃d(x, w) denote the evaluation obtained for state
x by applying J̃(·, w) to the leaf nodes of a depth d
minimax search from x. Our aim is to find a parameter
vector w such that J̃d(·, w) is a good approximation to
the expected reward J∗. One way to achieve this is to
apply the TD(λ) algorithm to J̃d(x, w). That is, for each
sequence of positions x1, . . . , xN in a game we define
the temporal differences
d_t := J̃_d(x_{t+1}, w) − J̃_d(x_t, w)    (8)
as per equation (3), and then the TD(λ) algorithm (4)
for updating the parameter vector w becomes
w := w + α Σ_{t=1}^{N−1} ∇J̃_d(x_t, w) [ Σ_{j=t}^{N−1} λ^{j−t} d_j ].    (9)
One problem with equation (9) is that for d > 1,
J̃_d(x, w) is not necessarily a differentiable function
of w for all values of w, even if J̃(·, w) is everywhere
differentiable. This is because for some values of w
there will be “ties” in the minimax search, i.e. there
will be more than one best move available in some of
the positions along the principal variation, which means
that the principal variation will not be unique. Thus, the
evaluation assigned to the root node, J̃d(x, w), will be
the evaluation of any one of a number of leaf nodes.
Fortunately, under some mild technical assumptions
on the behaviour of J̃(x, w), it can be shown that for
all states x and for “almost all” w ∈ Rk, J̃d(x, w)
is a differentiable function of w. Note that J̃d(x, w)
is also a continuous function of w whenever J̃(x, w)
is a continuous function of w. This implies that even
for the “bad” pairs (x, w), ∇J̃d(x, w) is only undefined
because it is multi-valued. Thus we can still arbitrarily
choose a particular value for ∇J̃d(x, w) if w happens to
land on one of the bad points.
Based on these observations we modified the TD(λ)
algorithm to take account of minimax search: instead of
working with the root positions x1, . . . , xN , the TD(λ)
algorithm is applied to the leaf positions found by min-
imax search from the root positions. We call this algo-
rithm TDLeaf(λ).
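As a minimal sketch of TDLeaf(λ) under the same illustrative assumptions as before (a linear evaluation J̃(x, w) = w · φ(x), a feature map `phi`, and a `minimax_leaf` routine that returns the principal-variation leaf of a depth-d search from a root position), the update can be written as follows. This is an illustration of the idea, not KnightCap's implementation.

```python
import numpy as np

def tdleaf_update(root_positions, outcome, w, phi, minimax_leaf,
                  alpha=1.0, lam=0.7):
    """One end-of-game TDLeaf(lambda) update for a linear evaluation:
    TD(lambda) is applied to the principal-variation leaves of the
    minimax searches rather than to the root positions themselves.

    root_positions -- positions x_1, ..., x_N reached during the game
    outcome        -- the final reward r(x_N)
    minimax_leaf   -- maps a root position to its principal-variation leaf
    """
    leaves = [minimax_leaf(x) for x in root_positions]
    grads = [np.asarray(phi(l), dtype=float) for l in leaves]  # grad of J~_d at each root
    values = [float(np.dot(w, g)) for g in grads]              # J~_d(x_t, w)
    values[-1] = outcome                                       # J~_d(x_N, w) = r(x_N)

    n = len(root_positions)
    d = [values[t + 1] - values[t] for t in range(n - 1)]      # equation (8)

    new_w = np.asarray(w, dtype=float).copy()
    for t in range(n - 1):
        weighted = sum(lam ** (j - t) * d[j] for j in range(t, n - 1))
        new_w += alpha * grads[t] * weighted                   # equation (9)
    return new_w
```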
Fig. 1: Full breadth, 3-ply search tree illustrating the minimax rule for propagating values. Each of the leaf nodes (H–O) is given a score by the
evaluation function, J̃(·, w). These scores are then propagated back up the tree by assigning to each opponent’s internal node the minimum
of its children’s values, and to each of our internal nodes the maximum of its children’s values. The principal variation is then the sequence of
best moves for either side starting from the root node, and this is illustrated by a dashed line in the figure. Note that the score at the root node
A is the evaluation of the leaf node (L) of the principal variation. As there are no ties between any siblings, the derivative of A’s score with
respect to the parameters w is just ∇J̃(L, w).
5. Experiments with Chess
In this section we describe several experiments in which
the TDLeaf(λ) and TD-directed(λ) algorithms were
used to train the weights of a linear evaluation function
for our chess program, called KnightCap.
For our main experiment we took KnightCap’s eval-
uation function and set all but the material parameters
to zero. The material parameters were initialised to the
standard “computer” values². With these parameter set-
tings KnightCap was started on the Free Internet Chess
server (FICS, fics.onenet.net). To establish its
rating, 25 games were played without modifying the
evaluation function, after which it had a blitz (fast time
control) rating of 1650 ± 50³. We then turned on the
TDLeaf(λ) learning algorithm, with λ = 0.7 and the
learning rate α = 1.0. The value of λ was chosen
arbitrarily, while α was set high enough to ensure rapid
modification of the parameters.
After only 308 games, KnightCap’s rating climbed to
2110 ± 50. This rating puts KnightCap at the level of
US Master.
We repeated the experiment using TD-directed(λ),
and observed a 200 point rating rise over 300 games:
a significant improvement, but slower than TDLeaf(λ).
There are a number of reasons for KnightCap’s re-
markable rate of improvement.
1. KnightCap started out with intelligent material pa-
rameters. This put it close in parameter space to
many far superior parameter settings.
2. Most players on FICS prefer to play opponents
of similar strength, and so KnightCap’s opponents
improved as it did. Hence it received both positive
and negative feedback from its games.
3. KnightCap was not learning by self-play.
² 1 for a pawn, 4 for a knight, 4 for a bishop, 6 for a rook and 12
for a queen.
³ After some experimentation, we have estimated the standard de-
viation of FICS ratings to be 50 rating points.
To investigate the importance of some of these
reasons, we conducted several more experiments.
Good initial conditions.
A second experiment was run in which KnightCap’s co-
efficients were all initialised to the value of a pawn.
Playing with this initial weight setting, KnightCap
had a blitz rating of 1260 ± 50. After more than 1000
games on FICS KnightCap’s rating has improved to
about 1540 ± 50, a 280 point gain. This is a much
slower improvement than the original experiment, and
makes it clear that starting near a good set of weights is
important for fast convergence.
Self-play.
Learning by self-play was extremely effective for TD-
Gammon, but a significant reason for this is the stochas-
ticity of backgammon. However, chess is a determin-
istic game and self-play by a deterministic algorithm
tends to result in a large number of substantially similar
games. This is not a problem if the games seen in self-
play are “representative” of the games played in prac-
tice, however KnightCap’s self-play games with only
non-zero material weights are very different to the kind
of games humans of the same level would play.
To demonstrate that learning by self-play for Knight-
Cap is not as effective as learning against real oppo-
nents, we ran another experiment in which all but the
material parameters were initialised to zero again, but
this time KnightCap learnt by playing against itself. Af-
ter 600 games (twice as many as in the original FICS
experiment), we played the resulting version against the
good version that learnt on FICS, in a 100 game match
with the weight values fixed. The FICS trained version
won 89 points to the self-play version’s 11.
6. Backgammon Experiment
For our backgammon experiment we were fortunate to
have Land (University of California, San Diego)
provide us with the source code for his LGammon pro-
gram which has been implemented along the lines of
Tesauro’s TD-Gammon[7, 8].
Along with the code for LGammon, Land also pro-
vided a set of weights for the neural network. The
weights were used by LGammon when playing on the
First Internet Backgammon Server (FIBS, fibs.com),
where LGammon achieved a rating which ranged from
1600 to 1680, significantly above the mean rating across
all players of about 1500. For convenience, we refer to
the weights as the FIBS weights.
Using LGammon and the FIBS weights to directly
compare searching to two-ply against searching to one-
ply, we observed that two-ply is stronger by 0.25 points-
per-game, a significant difference in backgammon. Fur-