CSC 311: Introduction to Machine Learning
Lecture 8 – Reinforcement Learning
University of Toronto
Reinforcement Learning Problem
In supervised learning, the problem is to predict an output t given an input x.
But often the ultimate goal is not to predict, but to make decisions, i.e., take actions.
In many cases, we want to take a sequence of actions, each of which affects the future possibilities, i.e., the actions have long-term
consequences.
This motivates learning-based approaches.
[Figure: an agent observes the world, takes an action, the world's state changes, and the agent aims to achieve long-term rewards.]
Reinforcement Learning Problem: An agent continually interacts with an environment. How should it choose its actions so that its long-term rewards are maximized?
Might also be called:
Playing Games: Atari
Playing Games: Super Mario
Making Pancakes!
Reinforcement Learning
Learning problems differ in the information available to the learner:
Supervised: For a given input, we know its corresponding output, e.g., class label.
Unsupervised: We only have input data. We somehow need to organize them in a meaningful way, e.g., clustering.
Reinforcement learning: We observe inputs, and we have to choose outputs (actions) in order to maximize rewards. Correct outputs are not provided.
In RL, we face the following challenges:
Continuous stream of input information, and we have to choose actions
Effects of an action depend on the state of the agent in the world
Obtain reward that depends on the state and actions
You know the reward for your action, not other possible actions.
Could be a delay between action and reward.
Reinforcement Learning
Example: Tic Tac Toe, Notation
Formalizing Reinforcement Learning Problems
Markov Decision Process (MDP) is the mathematical framework to describe RL problems.
A discounted MDP is defined by a tuple (S, A, P, R, γ).
S : State space. Discrete or continuous
A: Action space. Here we consider finite action space, i.e.,
A = {a1,…,aM}.
P : Transition probability
R: Immediate reward distribution
γ: Discount factor (0 ≤ γ ≤ 1)
Let us take a closer look at each of them.
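To make these pieces concrete, here is a minimal sketch (not from the slides) of how a small finite discounted MDP could be represented with NumPy arrays; the variable names, sizes, and random numbers are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of a small finite discounted MDP (S, A, P, R, gamma).
# Names and shapes here are illustrative choices, not fixed by the slides.
n_states, n_actions = 3, 2          # |S| = 3, |A| = 2
gamma = 0.9                         # discount factor, 0 <= gamma <= 1

# P[s, a, s'] = P(S_{t+1} = s' | S_t = s, A_t = a); each P[s, a, :] sums to 1.
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# r[s, a] = expected immediate reward E[R | S_t = s, A_t = a].
r = np.random.uniform(-1.0, 1.0, size=(n_states, n_actions))
```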
Formalizing Reinforcement Learning Problems
The agent has a state s ∈ S in the environment, e.g., the location of X and O in tic-tac-toe, or the location of a robot in a room.
At every time step t = 0,1,…, the agent is at state St.
Takes an action At
Moves into a new state St+1, according to the dynamics of the
environment and the selected action, i.e., St+1 ∼ P(·|St, At)
Receives some reward Rt ∼ R(·|St , At , St+1 )
Alternatively, it can be Rt ∼ R(·|St, At) or Rt ∼ R(·|St)
Formulating Reinforcement Learning
The action selection mechanism is described by a policy π
Policy π is a mapping from states to actions, i.e., At = π(St)
(deterministic policy) or At ∼ π(·|St) (stochastic policy).
The goal is to find a policy π such that the long-term reward of the agent is maximized.
Different notions of the long-term reward:
Cumulative/total reward: R0 + R1 + R2 + · · ·
Discounted (cumulative) reward: R0 + γR1 + γ²R2 + · · ·
The discount factor 0 ≤ γ ≤ 1 determines how myopic or farsighted the agent is.
When γ is closer to 0, the agent prefers to obtain reward as soon as possible.
When γ is close to 1, the agent is willing to receive rewards in the farther future.
The discount factor γ has a financial interpretation: If a dollar next year is worth almost the same as a dollar today, γ is close to 1. If a dollar’s worth next year is much less its worth today, γ is close to 0.
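As a small illustration of how γ trades off near-term against far-term reward, here is a sketch that computes the discounted return of a made-up reward sequence; the reward values are purely hypothetical.

```python
# Sketch: comparing cumulative and discounted returns for a made-up reward sequence.
rewards = [0.0, 0.0, 1.0, 0.0, 10.0]   # hypothetical rewards R_0, R_1, ...

def discounted_return(rewards, gamma):
    # R_0 + gamma*R_1 + gamma^2*R_2 + ...
    return sum(gamma**t * r_t for t, r_t in enumerate(rewards))

print(discounted_return(rewards, gamma=1.0))   # cumulative (total) reward
print(discounted_return(rewards, gamma=0.5))   # myopic: the later +10 is discounted heavily
print(discounted_return(rewards, gamma=0.99))  # farsighted: the +10 is valued almost fully
```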
Transition Probability (or Dynamics)
The transition probability describes the changes in the state of the agent when it chooses actions
P(St+1 =s′|St =s,At =a)
This model has the Markov property: the future depends on the past only through the current state.
Value Function
Value function is the expected future reward, and is used to evaluate the desirability of states.
State-value function V π (or simply value function) for policy π is a function defined as
Vπ(s) ≜ Eπ[ ∑_{t≥0} γ^t Rt | S0 = s ], where Rt ∼ R(·|St, At, St+1).
It describes the expected discounted reward if the agent starts from state s and follows policy π.
Action-value function Qπ for policy π is
Qπ(s, a) ≜ Eπ[ ∑_{t≥0} γ^t Rt | S0 = s, A0 = a ].
It describes the expected discounted reward if the agent starts from state
s, takes action a, and afterwards follows policy π.
Value Function
The goal is to find a policy π that maximizes the value function. Optimal value function:
Q∗(s, a) = sup_π Qπ(s, a)
Given Q∗, the optimal policy can be obtained as π∗(s) = argmax_{a∈A} Q∗(s, a)
The goal of an RL agent is to find a policy π that is close to optimal, i.e., Qπ ≈Q∗.
Example: Tic-Tac-Toe
Consider the game tic-tac-toe:
State: Positions of X’s and O’s on the board
Action: The location of the new X or O; based on the rules of the game, a choice of one open position.
Policy: mapping from states to actions
Reward: win/lose/tie the game (+1/−1/0) [only at the final move in a given game]
Value function: prediction of future reward, based on the current state
In tic-tac-toe, since state space is tractable, we can use a table to represent value function
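A minimal sketch of such a tabular value function, keyed by the board configuration; the 9-character string encoding of the board is an illustrative assumption, not something fixed by the slides.

```python
# Sketch: a tabular state-value function for tic-tac-toe, stored as a dictionary
# keyed by the board configuration.
values = {}                       # maps board string -> estimated value

def board_key(board):
    # board: list of 9 entries from {'X', 'O', ' '}, read row by row
    return ''.join(board)

empty = [' '] * 9
values[board_key(empty)] = 0.0    # initial estimate for the empty board

# After "X" plays the centre square, look up (or initialize) that state's value.
after_x = empty.copy()
after_x[4] = 'X'
v = values.setdefault(board_key(after_x), 0.0)
print(v)
```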
Recall that the reward at time t is a random variable sampled from Rt ∼ R(·|St, At, St+1). In the sequel, we will assume that the reward depends only on the current state and action: Rt ∼ R(·|St, At). We can write it as Rt = R(St, At), and define r(s, a) = E[R(s, a)].
The value function satisfies the following recursive relationship:
Qπ(s, a) = E[ ∑_{t≥0} γ^t Rt | S0 = s, A0 = a ]
= E[ R(S0, A0) + γ ∑_{t≥0} γ^t R_{t+1} | S0 = s, A0 = a ]
= E[ R(S0, A0) + γ Qπ(S1, π(S1)) | S0 = s, A0 = a ]
= r(s, a) + γ ∫_S P(s′|s, a) Qπ(s′, π(s′)) ds′
The value function satisfies
Qπ(s, a) = r(s, a) + γ ∫_S P(s′|s, a) Qπ(s′, π(s′)) ds′.
This is called the Bellman equation for policy π.
We also define the Bellman operator Tπ as a mapping that takes any value function Q (not necessarily Qπ) and returns another value function as follows:
(TπQ)(s, a) ≜ r(s, a) + γ ∫_S P(s′|s, a) Q(s′, π(s′)) ds′,  ∀(s, a) ∈ S × A.
The Bellman equation can succinctly be written as
Qπ = TπQπ.
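For a finite MDP, the integral in the Bellman operator becomes a sum, and Tπ can be applied to a Q-table in one line of NumPy. This is a sketch under the array conventions of the earlier finite-MDP sketch (P, r, gamma); the function name is illustrative.

```python
import numpy as np

def bellman_operator_pi(Q, policy, P, r, gamma):
    """One application of T^pi to a Q-table for a finite MDP (a sketch).

    Q:      array of shape (n_states, n_actions)
    policy: array of shape (n_states,), policy[s] = pi(s) (deterministic)
    P:      array of shape (n_states, n_actions, n_states), transition probabilities
    r:      array of shape (n_states, n_actions), expected immediate rewards
    """
    n_states, n_actions = r.shape
    # Q(s', pi(s')) for every next state s'
    q_next = Q[np.arange(n_states), policy]          # shape (n_states,)
    # (T^pi Q)(s, a) = r(s, a) + gamma * sum_{s'} P(s'|s, a) Q(s', pi(s'))
    return r + gamma * P @ q_next                    # shape (n_states, n_actions)
```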
We also define the Bellman optimality operator T∗ as a mapping that takes any value function Q and returns another value function as follows:
(T∗Q)(s, a) ≜ r(s, a) + γ ∫_S P(s′|s, a) max_{a′∈A} Q(s′, a′) ds′,  ∀(s, a) ∈ S × A.
We may also define the Bellman optimality equation, whose solution is the optimal value function:
Q∗ = T∗Q∗,
or, less compactly,
Q∗(s, a) = r(s, a) + γ ∫_S P(s′|s, a) max_{a′∈A} Q∗(s′, a′) ds′.
Note that if we solve the Bellman (optimality) equation, we find the value function Qπ (or Q∗).
Key observations:
Q∗ = T∗Q∗ (exercise)
The solutions of these fixed-point equations are unique.
Value-based approaches try to find a Q̂ such that
Q̂ ≈ T∗Q̂.
The greedy policy of Q̂ is then close to the optimal policy:
Q^{π(·;Q̂)} ≈ Q^{π∗} = Q∗,
where the greedy policy of Q̂ is defined as
π(s; Q̂) = argmax_{a∈A} Q̂(s, a)
Finding the Value Function
Let us first study the policy evaluation problem: Given a policy π, find Qπ (or Vπ).
Policy evaluation is an intermediate step for many RL methods.
The uniqueness of the fixed-point of the Bellman operator implies that if
we find a Q such that TπQ = Q, then Q = Qπ.
For now, assume that the transition probability P and the expected reward r(s, a) = E[R(s, a)] are known.
If the state-action space S × A is finite (and not very large, i.e., hundreds or thousands, but not millions or billions), we can solve the following Linear System of Equations over Q:
Q(s, a) = r(s, a) + γ ∑_{s′∈S} P(s′|s, a) Q(s′, π(s′)),  ∀(s, a) ∈ S × A
This is feasible for small problems (|S × A| is not too large), but for large problems there are better approaches.
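A sketch of solving this linear system exactly with NumPy, assuming the finite-MDP arrays (P, r, gamma) and a deterministic policy array as in the earlier sketches; all names are illustrative.

```python
import numpy as np

def policy_evaluation_exact(policy, P, r, gamma):
    """Solve the Bellman linear system for Q^pi on a finite MDP (a sketch).

    policy[s] gives the deterministic action pi(s).
    Returns Q of shape (n_states, n_actions).
    """
    n_states, n_actions = r.shape
    # Build M[(s,a), (s',a')] = gamma * P(s'|s,a) * 1[a' == pi(s')]
    onehot = np.zeros((n_states, n_actions))
    onehot[np.arange(n_states), policy] = 1.0
    M = gamma * np.einsum('sap,pb->sapb', P, onehot)        # shape (S, A, S, A)
    M = M.reshape(n_states * n_actions, n_states * n_actions)
    # Solve (I - M) q = r for the flattened Q-values.
    q = np.linalg.solve(np.eye(n_states * n_actions) - M, r.reshape(-1))
    return q.reshape(n_states, n_actions)
```

For instance, with the P, r, gamma arrays from the earlier sketch and policy = np.zeros(n_states, dtype=int), this returns the exact Qπ table for the "always take action 0" policy.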
Finding the Optimal Value Function
The Bellman optimality operator also has a unique fixed point. If we find a Q such that T∗Q = Q, then Q = Q∗.
Let us try an approach similar to what we did for the policy evaluation problem.
If the state-action space S × A is finite (and not very large), we can solve the following nonlinear system of equations:
Q(s, a) = r(s, a) + γ ∑_{s′∈S} P(s′|s, a) max_{a′∈A} Q(s′, a′),  ∀(s, a) ∈ S × A.
This is a nonlinear system of equations, and can be difficult to solve.
Can we do anything else?
Finding the Optimal Value Function: Value Iteration
Assume that we know the model P and R. How can we find the optimal value function?
Finding the optimal policy/value function when the model is known is sometimes called the planning problem.
We can benefit from the Bellman optimality equation and use a method called Value Iteration: start from an initial function Q1, and for each k = 1, 2, …, apply
Qk+1 ← T∗Qk,
i.e.,
Qk+1(s, a) ← r(s, a) + γ ∫_S P(s′|s, a) max_{a′∈A} Qk(s′, a′) ds′
(continuous state space), or
Qk+1(s, a) ← r(s, a) + γ ∑_{s′∈S} P(s′|s, a) max_{a′∈A} Qk(s′, a′)
(finite state space).
[Figure: repeated application of the Bellman operator T∗, Qk+1 ← T∗Qk, approaches Q∗.]
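A sketch of Value Iteration for the finite case, reusing the array conventions above; the stopping tolerance and iteration cap are illustrative choices.

```python
import numpy as np

def value_iteration(P, r, gamma, tol=1e-8, max_iter=10_000):
    """Value Iteration on a finite MDP: repeatedly apply the Bellman
    optimality operator T* until the Q-table stops changing (a sketch)."""
    n_states, n_actions = r.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(max_iter):
        # (T* Q)(s, a) = r(s, a) + gamma * sum_{s'} P(s'|s, a) max_{a'} Q(s', a')
        Q_new = r + gamma * P @ Q.max(axis=1)
        if np.max(np.abs(Q_new - Q)) < tol:   # sup-norm stopping criterion
            return Q_new
        Q = Q_new
    return Q

# Greedy policy w.r.t. the (approximately) optimal Q: pi*(s) = argmax_a Q*(s, a)
# pi_star = value_iteration(P, r, gamma).argmax(axis=1)
```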
Value Iteration
Value Iteration, Qk+1 ← T∗Qk, converges to the optimal value function.
This is because of the contraction property of the Bellman (optimality) operator, i.e., ∥T∗Q1 − T∗Q2∥∞ ≤ γ ∥Q1 − Q2∥∞: by the Banach fixed-point theorem, repeatedly applying a γ-contraction converges to its unique fixed point, which here is Q∗.
The Bellman Operator is a Contraction (Optional)
[Figure: the operator T∗ (or Tπ) maps Q1 and Q2 to T∗Q1 and T∗Q2, which are closer together.]
(T∗Q1)(s, a) − (T∗Q2)(s, a)
= [ r(s, a) + γ ∫_S P(s′|s, a) max_{a′∈A} Q1(s′, a′) ds′ ] − [ r(s, a) + γ ∫_S P(s′|s, a) max_{a′∈A} Q2(s′, a′) ds′ ]
= γ ∫_S P(s′|s, a) [ max_{a′∈A} Q1(s′, a′) − max_{a′∈A} Q2(s′, a′) ] ds′
≤ γ ∫_S P(s′|s, a) max_{a′∈A} |Q1(s′, a′) − Q2(s′, a′)| ds′
≤ γ sup_{(s′,a′)∈S×A} |Q1(s′, a′) − Q2(s′, a′)| ∫_S P(s′|s, a) ds′
= γ sup_{(s′,a′)∈S×A} |Q1(s′, a′) − Q2(s′, a′)|
The Bellman Operator is a Contraction (Optional)
Therefore, we get that
sup_{(s,a)∈S×A} |(T∗Q1)(s, a) − (T∗Q2)(s, a)| ≤ γ sup_{(s,a)∈S×A} |Q1(s, a) − Q2(s, a)|.
Or more succinctly,
∥T∗Q1 − T∗Q2∥∞ ≤ γ ∥Q1 − Q2∥∞.
We also have a similar result for the Bellman operator of a policy π:
∥TπQ1 − TπQ2∥∞ ≤ γ ∥Q1 − Q2∥∞ .
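A quick numerical sanity check of the contraction bound on random Q-tables, assuming the finite-MDP arrays P, r, gamma from the earlier sketches; it is a check on an example, not a proof.

```python
import numpy as np

def T_star(Q, P, r, gamma):
    # Bellman optimality operator on a finite MDP (same conventions as above).
    return r + gamma * P @ Q.max(axis=1)

rng = np.random.default_rng(0)
Q1 = rng.normal(size=r.shape)
Q2 = rng.normal(size=r.shape)

lhs = np.max(np.abs(T_star(Q1, P, r, gamma) - T_star(Q2, P, r, gamma)))
rhs = gamma * np.max(np.abs(Q1 - Q2))
print(lhs <= rhs + 1e-12)   # the contraction bound should hold (prints True)
```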
Challenges
When we have a large state space (e.g., when S ⊂ Rd or |S × A| is very large):
Exact representation of the value (Q) function is infeasible for all (s, a) ∈ S × A.
The exact integration in the Bellman operator is challenging:
Qk+1(s, a) ← r(s, a) + γ ∫_S P(s′|s, a) max_{a′∈A} Qk(s′, a′) ds′
We often do not know the dynamics P and the reward function R, so we cannot calculate the Bellman operators.
Is There Any Hope?
During this course, we saw many methods to learn functions (e.g., classifier, regressor) when the input is continuous-valued and we are only given a finite number of data points.
We may adapt those techniques to solve RL problems.
There are some other aspects of the RL problem that we do not touch in this course; we briefly mention them later.
Batch RL and Approximate Dynamic Programming
Suppose that we are given the following dataset:
DN = {(Si, Ai, Ri, Si′)}_{i=1}^N, where
(Si, Ai) ∼ ν (ν is a distribution over S × A),
Si′ ∼ P(·|Si, Ai),
Ri ∼ R(·|Si, Ai).
Can we estimate Q ≈ Q∗ using these data?
From Value Iteration to Approximate Value Iteration
Recall that each iteration of VI computes
Qk+1 ←T∗Qk
We cannot directly compute T∗Qk. But we can use data to
approximately perform one step of VI.
Consider (Si, Ai, Ri, Si′) from the dataset DN .
Consider a function Q : S ×A → R.
We can define a random variable ti = Ri + γ max_{a′∈A} Q(Si′, a′). Notice that
E[ ti | Si, Ai ] = E[ Ri + γ max_{a′∈A} Q(Si′, a′) | Si, Ai ]
= r(Si, Ai) + γ ∫_S P(s′|Si, Ai) max_{a′∈A} Q(s′, a′) ds′ = (T∗Q)(Si, Ai).
So ti = Ri + γ max_{a′∈A} Q(Si′, a′) is a noisy version of (T∗Q)(Si, Ai).
Fitting a function to noisy real-valued data is the regression problem.
From Value Iteration to Approximate Value Iteration
Qk+1 ← T∗Qk (one application of the Bellman operator T∗ to the current estimate Qk)
Given the dataset DN = {(Si, Ai, Ri, Si′)}_{i=1}^N and an action-value function estimate Qk, we can construct the dataset {(x(i), t(i))}_{i=1}^N with x(i) = (Si, Ai) and t(i) = Ri + γ max_{a′∈A} Qk(Si′, a′).
Because E[ Ri + γ max_{a′∈A} Qk(Si′, a′) | Si, Ai ] = (T∗Qk)(Si, Ai), we can treat the problem of estimating Qk+1 as a regression problem with noisy data.
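A sketch of how such regression targets could be built from a batch for one step, assuming discrete (integer-coded) states and actions and a Q-table estimate Qk; the array names are illustrative.

```python
import numpy as np

def avi_targets(S, A, R, S_next, Q_k, gamma):
    """Build the regression dataset for one step of approximate value iteration.

    S, A, R, S_next: arrays of length N holding the batch D_N
                     (integer-coded states and actions).
    Q_k:             current action-value estimate as a table of shape
                     (n_states, n_actions).
    """
    # Noisy targets t_i = R_i + gamma * max_{a'} Q_k(S'_i, a'),
    # whose conditional mean is (T* Q_k)(S_i, A_i).
    targets = R + gamma * Q_k[S_next].max(axis=1)
    inputs = np.stack([S, A], axis=1)      # x_i = (S_i, A_i)
    return inputs, targets
```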
From Value Iteration to Approximate Value Iteration
Given the dataset DN = {(Si, Ai, Ri, Si′)}_{i=1}^N and an action-value function estimate Qk, we solve a regression problem. We minimize the squared error:
Qk+1 ← argmin_{Q∈F} (1/N) ∑_{i=1}^N [ Q(Si, Ai) − ( Ri + γ max_{a′∈A} Qk(Si′, a′) ) ]²
We run this procedure K times.
The policy of the agent is selected to be the greedy policy w.r.t. the final estimate of the value function: At state s ∈ S, the agent chooses
π(s; QK) ← argmax_{a∈A} QK(s, a)
This method is called Approximate Value Iteration or Fitted Value Iteration.
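A hedged sketch of fitted Q iteration using a generic scikit-learn regressor on (state, action) features; the choice of RandomForestRegressor, the initialization of Q1 from immediate rewards only, and all names are illustrative assumptions rather than the slides' prescription.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fitted_q_iteration(S, A, R, S_next, n_actions, gamma, K=20):
    """A sketch of Approximate (Fitted) Value Iteration.

    S, S_next: arrays of shape (N, state_dim); A, R: arrays of shape (N,).
    Returns a regressor q_hat with q_hat.predict([[s..., a]]) ~ Q_K(s, a).
    """
    N = len(R)
    X = np.column_stack([S, A])                     # inputs x_i = (S_i, A_i)
    targets = R.copy()                              # one simple initialization: Q_1 fits immediate rewards
    q_hat = RandomForestRegressor(n_estimators=50).fit(X, targets)

    for _ in range(K - 1):
        # Evaluate max_{a'} Q_k(S'_i, a') by querying the regressor for every action.
        q_next = np.column_stack([
            q_hat.predict(np.column_stack([S_next, np.full(N, a)]))
            for a in range(n_actions)
        ])
        targets = R + gamma * q_next.max(axis=1)    # noisy version of (T* Q_k)(S_i, A_i)
        q_hat = RandomForestRegressor(n_estimators=50).fit(X, targets)
    return q_hat
```

The greedy policy at a state s then picks the action a that maximizes q_hat.predict on the feature vector (s, a).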
Choice of Estimator
We have many choices for the regression method (and the function space F):
Linear models: F = {Q(s, a) = w⊤ψ(s, a)}.
How to choose the feature mapping ψ?
Decision Trees, Random Forest, etc.
Kernel-based methods, and regularized variants.
(Deep) Neural Networks. Deep Q Network (DQN) is an example of performing AVI with DNN, with some DNN-specific tweaks.
Some Remarks on AVI
AVI converts a value function estimation problem to a sequence of regression problems.
As opposed to the conventional regression problem, the target of AVI, which is T∗Qk, changes at each iteration.
Usually we cannot guarantee that the solution of the regression problem Qk+1 is exactly equal to T∗Qk. We may only have Qk+1 ≈ T∗Qk.
These errors might accumulate and may even cause divergence.
From Batch RL to Online RL
We started from the setting where the model was known (Planning) to the setting where we do not know the model, but we have a batch of data coming from the previous interaction of the agent with the environment (Batch RL).
This allowed us to use tools from the supervised learning literature (particularly, regression) to design RL algorithms.
But RL problems are often interactive: the agent continually interacts with the environment, updating its knowledge of the world and its policy, with the goal of achieving as much reward as possible.
Can we obtain an online algorithm for updating the value function?
An extra difficulty is that an RL agent should handle its interaction with the environment carefully: it should collect as much information about the environment as possible (exploration), while benefitting from the knowledge that has been gathered so far in order to obtain a lot of rewards (exploitation).
Suppose that the agent continually interacts with the environment. This means that:
At time step t, the agent observes the state variable St.
The agent chooses an action At according to its policy, i.e.,
At = πt(St).
The state of the agent in the environment changes according to the dynamics: at time step t + 1, the state is St+1 ∼ P(·|St, At).
The agent observes the reward variable too: Rt ∼ R(·|St, At).
Two questions:
Can we update the estimate of the action-value function Q online and only based on (St, At, Rt, St+1) such that it converges to the optimal value function Q∗?
What should the policy πt be so the algorithm explores the environment?
Q-Learning is an online algorithm that addresses these questions. We present Q-Learning for finite state-action problems.
Q-Learning with ε-Greedy Policy
Parameters:
Learning rate α (0 < α < 1)
Exploration parameter ε
Initialize Q(s, a) for all (s, a) ∈ S × A. The agent starts at state S0.
For time step t = 0, 1, ...,
Choose At according to the ε-greedy policy, i.e.,
At ← argmax_{a∈A} Q(St, a) with probability 1 − ε, or a uniformly random action in A with probability ε
Take action At in the environment.
The state of the agent changes from St to St+1 ∼ P (·|St , At )
Observe St+1 and Rt
Update the action-value function at state-action (St,At):
Q(St, At) ← Q(St, At) + α [ Rt + γ max_{a′∈A} Q(St+1, a′) − Q(St, At) ]
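A sketch of this algorithm in code, assuming a Gym-style environment object with reset() and step() returning integer-coded states; that interface, and the handling of terminal states, are assumptions not spelled out on the slide.

```python
import numpy as np

def q_learning(env, n_states, n_actions, gamma, alpha=0.1, eps=0.1, n_steps=100_000):
    """Tabular Q-learning with an epsilon-greedy policy (a sketch).

    `env` is assumed to provide reset() -> state and step(action) ->
    (next_state, reward, done) with integer-coded states and actions.
    """
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    s = env.reset()
    for _ in range(n_steps):
        # epsilon-greedy: explore with probability eps, otherwise act greedily
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r_t, done = env.step(a)
        # Q-learning update; treating terminal states as having zero future
        # value is a common convention.
        target = r_t + gamma * (0.0 if done else np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = env.reset() if done else s_next
    return Q
```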
Exploration vs. Exploitation
The ε-greedy is a simple mechanism for maintaining exploration-exploitation tradeoff.
πε(S; Q) = argmax_{a∈A} Q(S, a) with probability 1 − ε, or a uniformly random action in A with probability ε
The ε-greedy policy ensures that most of the time (probability 1 − ε) the agent exploits its incomplete knowledge of the world by choosing the best action (i.e., corresponding to the highest action-value), but occasionally (probability ε) it explores other actions.
Without exploration, the agent may never find some good actions.
The ε-greedy is one of the simplest and widely used methods for trading-off exploration and exploitation. Exploration-exploitation tradeoff is an important topic of research.
Softmax Action Selection
We can also choose actions by sampling from probabilities derived from softmax function.
πβ(a|S; Q) = exp(βQ(S, a)) / ∑_{a′∈A} exp(βQ(S, a′))
β → ∞ recovers greedy action selection.
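A small sketch of softmax (Boltzmann) action selection; subtracting the maximum before exponentiating is a standard numerical-stability trick and does not change the resulting probabilities.

```python
import numpy as np

def softmax_action(q_values, beta, rng=None):
    """Sample an action with probability proportional to exp(beta * Q(S, a))."""
    rng = np.random.default_rng() if rng is None else rng
    z = beta * np.asarray(q_values, dtype=float)
    z -= z.max()                       # numerical stability; probabilities unchanged
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(probs), p=probs))

# Large beta concentrates on the greedy action; small beta is close to uniform.
print(softmax_action([1.0, 2.0, 0.5], beta=10.0))
```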
Examples of Exploration-Exploitation in the Real World
Restaurant Selection
Exploitation: Go to your favourite restaurant
Exploration: Try a new restaurant
Online Banner Advertisements
Exploitation: Show the most successful advertisement
Exploration: Show a different advertisement
Oil Drilling
Exploitation: Drill at the best known location
Exploration: Drill at a new location
Game Playing
Exploitation: Play the move you believe is best
Exploration: Play an experimental move
[Slide credit: D. Silver]
An Intuition on Why Q-Learning Works (Optional)
Consider a tuple (S, A, R, S′). The Q-learning update is
Q(S, A) ← Q(S, A) + α [ R + γ max_{a′∈A} Q(S′, a′) − Q(S, A) ].
To understand this better, let us focus on its stochastic equilibrium, i.e., where the expected change in Q(S, A) is zero. We have
E[ R + γ max_{a′∈A} Q(S′, a′) − Q(S, A) | S, A ] = 0
⇒ (T∗Q)(S, A) = Q(S, A)
So at the stochastic equilibrium, we have (T ∗Q)(S, A) = Q(S, A). Because the fixed-point of the Bellman optimality operator is unique (and is Q∗), Q is the same as the optimal action-value function Q∗.
One can show that under certain conditions, Q-Learning indeed converges to the optimal action-value function Q∗.