
Ve492: Introduction to Artificial Intelligence
Reinforcement Learning
Exams, Grades
Paul, UM-SJTU Joint Institute


Some slides adapted from http://ai.berkeley.edu

Example: Prescription Problem
P(cure) = 0.2
P(cure) = 0.4
P(cure) = 0.9
P(cure) = 0.1

Example: Prescription Problem
P(cure) = ?
P(cure) = ?
P(cure) = ?
P(cure) = ?

Let’s Play!
P(cure) = ? P(cure) = ? P(cure) = ?
P(cure) = ?

What Just Happened?
❖ That wasn’t planning, it was learning!
❖ Specifically, reinforcement learning
❖ There was an MDP, but you couldn’t solve it with just computation
❖ You needed to actually act to figure it out
❖ Important ideas in reinforcement learning that came up
❖ Exploration: you have to try unknown actions to get information
❖ Exploitation: eventually, you have to use what you know
❖ Regret: even if you learn intelligently, you make mistakes
❖ Sampling: because of chance, you have to try things repeatedly
❖ Difficulty: learning can be much harder than solving a known MDP
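To make these ideas concrete, the prescription game above can be treated as a multi-armed bandit. The sketch below is a minimal illustration, assuming the cure probabilities from the example slides (which the learner does not know) and an arbitrary number of trials; it uses ε-greedy action selection to trade off exploration and exploitation.

```python
import random

# Cure probabilities from the example slides (unknown to the learner).
TRUE_P_CURE = [0.2, 0.4, 0.9, 0.1]

def epsilon_greedy_bandit(num_trials=1000, epsilon=0.1):
    counts = [0] * len(TRUE_P_CURE)    # times each drug was tried
    values = [0.0] * len(TRUE_P_CURE)  # running estimate of P(cure) per drug
    cures = 0
    for _ in range(num_trials):
        if random.random() < epsilon:                  # explore: random drug
            a = random.randrange(len(TRUE_P_CURE))
        else:                                          # exploit: best estimate so far
            a = max(range(len(TRUE_P_CURE)), key=lambda i: values[i])
        reward = 1 if random.random() < TRUE_P_CURE[a] else 0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental sample average
        cures += reward
    return values, cures

estimates, total_cures = epsilon_greedy_bandit()
print(estimates, total_cures)
```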

Reinforcement Learning
❖ Basic idea:
❖ Receive feedback in the form of rewards
[Diagram: agent–environment loop: the agent sends actions to the environment; the environment returns states and rewards]
❖ Agent’s utility is defined by the reward function
❖ Must (learn to) act so as to maximize expected rewards
❖ All learning is based on observed samples of outcomes!

Example: Learning to Walk

Example: Atari

Example: Robots

Video of Demo Crawler Bot

Reinforcement Learning
❖ Still assume a Markov decision process (MDP):
❖ A set of states s ∈ S
❖ A set of actions (per state) A
❖ A model T(s,a,s’)
❖ A reward function R(s,a,s’)
❖ Still looking for a policy π(s)
❖ New twist: don’t know T or R
❖ i.e., we don’t know which states are good or what the actions do
❖ Must actually try out actions and states to learn

Offline (MDPs) vs. Online (RL)
Offline Solution vs. Online Learning

How to Solve RL Problems?
❖ In online RL, the agent interacts with the world
❖ It learns from transitions (s_t, a_t, r_t, s_{t+1})
❖ Model-based learning
❖ Model-free learning
❖ Policy evaluation
❖ Policy learning

Model-Based Learning
❖ Model-Based Idea:
❖ Learn an approximate model based on experiences
❖ Solve for values as if the learned model were correct
❖ Step 1: Learn empirical MDP model
❖ Count outcomes s’ for each (s, a)
❖ Normalize to give an estimate T̂(s, a, s’)
❖ Discover each R̂(s, a, s’) when we experience (s, a, s’)
❖ Step 2: Solve the learned MDP
❖ For example, use value iteration, as before
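A minimal sketch of Step 1 in code, assuming episodes are given as lists of (s, a, r, s’) transitions; Step 2 (running value iteration on the learned model) is left as a placeholder since it depends on how the solver is implemented.

```python
from collections import defaultdict

def learn_model(episodes):
    """Estimate T-hat(s,a,s') and R-hat(s,a,s') by counting observed transitions."""
    counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
    rewards = {}                                     # (s, a, s') -> observed reward
    for episode in episodes:
        for (s, a, r, s_next) in episode:
            counts[(s, a)][s_next] += 1
            rewards[(s, a, s_next)] = r
    T_hat = {}
    for (s, a), outcomes in counts.items():
        total = sum(outcomes.values())
        for s_next, c in outcomes.items():
            T_hat[(s, a, s_next)] = c / total        # normalize counts into probabilities
    return T_hat, rewards

# Step 2 would then run value iteration on (T_hat, rewards) as if they were exact.
```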

Example: Model-Based Learning
Input Policy π
Observed Episodes (Training)
Learned Model
T(B, east, C) = 1.00 T(C, east, D) = 0.75 T(C, east, A) = 0.25
R(B, east, C) = -1 R(C, east, D) = -1 R(D, exit, x) = +10
Assume: γ = 1
B, east, C, -1 C, east, D, -1 D, exit, x, +10
E, north, C, -1 C, east, D, -1 D, exit, x, +10
B, east, C, -1 C, east, D, -1 D, exit, x, +10
E, north, C, -1 C, east, A, -1 A, exit, x, -10
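Where these numbers come from: across the four episodes, the q-state (C, east) is observed four times, landing in D three times and in A once, so T(C, east, D) = 3/4 = 0.75 and T(C, east, A) = 1/4 = 0.25; (B, east) lands in C both times it is tried, so T(B, east, C) = 1.00. The R entries are simply the rewards observed on those transitions.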

How to Solve RL Problems?
❖ In online RL, the agent interacts with the world
❖ It learns from transitions (s_t, a_t, r_t, s_{t+1})
❖ Model-based learning
❖ Model-free learning
❖ Policy evaluation
❖ Policy learning

Passive Reinforcement Learning
❖ Simplified task: policy evaluation
❖ Input: a fixed policy π(s)
❖ You don’t know the transitions T(s,a,s’)
❖ You don’t know the rewards R(s,a,s’)
❖ Goal: learn the state values
❖ In this case:
❖ Learner is “along for the ride”
❖ No choice about what actions to take
❖ Just execute the policy and learn from experience
❖ This is NOT offline planning! You actually take actions in the world.

Direct Evaluation
❖ Goal: Compute values for each state under π
❖ Idea: Average together observed sample values
❖ Act according to π
❖ Every time you visit a state, write down what the sum of
discounted rewards turned out to be
❖ Average those samples
❖ This is called direct evaluation
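A minimal sketch of direct evaluation, assuming episodes are lists of (s, a, r, s’) transitions generated by following π and that the discount γ is given.

```python
from collections import defaultdict

def direct_evaluation(episodes, gamma=1.0):
    """Average the observed discounted return from every visit to every state."""
    totals = defaultdict(float)   # sum of returns observed from each state
    visits = defaultdict(int)     # number of visits to each state
    for episode in episodes:
        G = 0.0
        # Walk the episode backwards so G is the return from each visited state.
        for (s, a, r, s_next) in reversed(episode):
            G = r + gamma * G
            totals[s] += G
            visits[s] += 1
    return {s: totals[s] / visits[s] for s in totals}
```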

Example: Direct Evaluation
Input Policy π
Observed Episodes (Training)
Output Values
Assume: γ = 1
B, east, C, -1 C, east, D, -1 D, exit, x, +10
E, north, C, -1 C, east, D, -1 D, exit, x, +10
B, east, C, -1 C, east, D, -1 D, exit, x, +10
E, north, C, -1 C, east, A, -1 A, exit, x, -10
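Working through the four episodes with γ = 1: every visit to D returns +10 and the single visit to A returns −10, so V(D) = +10 and V(A) = −10. C is visited once per episode with returns 9, 9, 9, −11, so V(C) = (9 + 9 + 9 − 11)/4 = +4. Both visits to B return −1 − 1 + 10 = +8, so V(B) = +8, while E’s two visits return +8 and −12, so V(E) = −2.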

Problems with Direct Evaluation
❖ What’s good about direct evaluation?
❖ It’s easy to understand
❖ It doesn’t require any knowledge of T, R
❖ It eventually computes the correct average
values, using just sample transitions
❖ What’s bad about it?
❖ It wastes information about state
connections
❖ Each state must be learned separately
❖ So, it takes a long time to learn
Output Values
If B and E both go to C under this policy, how can their values be different?

Why Not Use Policy Evaluation?
❖ Simplified Bellman updates calculate V for a fixed policy:
❖ Each round, replace V with a one-step-look-ahead layer over V
❖ This approach fully exploited connections between the states
❖ Unfortunately, we need T and R to do it!
❖ Key question: how can we do this update to V without knowing T and R?
❖ In other words, how do we take a weighted average without knowing the weights?
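For reference, the fixed-policy updates being described (from the MDP lectures) are:

V_0^π(s) = 0
V_{k+1}^π(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V_k^π(s') ]

The weights in this one-step average are exactly the unknown transition probabilities T(s, π(s), s').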

Sample-Based Policy Evaluation?
❖ We want to improve our estimate of V by computing these averages:
❖ Idea: Take samples of outcomes s’ (by doing the
action!) and average
[Diagram: from state s, following π(s) leads to the q-state (s, π(s)); acting produces (s, π(s), s'), with possible successors s1', s2', s3']
Almost! But we can’t rewind time to get sample after sample from state s.
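For reference, the averages referred to above: if we could draw successor samples s'_1, …, s'_n from state s under π, we would estimate

sample_i = R(s, π(s), s'_i) + γ V_k^π(s'_i)
V_{k+1}^π(s) ← (1/n) Σ_i sample_i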

Temporal Difference Learning
❖ Big idea: learn from every experience!
❖ Update V(s) each time we experience a transition (s, a, s’, r)
❖ Likely outcomes s’ will contribute updates more often
❖ Temporal difference learning of values
❖ Policy still fixed, still doing evaluation!
❖ Move values toward value of whatever successor occurs: running average
Sample of V(s): sample = R(s, π(s), s') + γ V^π(s')
Update to V(s): V^π(s) ← (1 − α) V^π(s) + α · sample
Same update: V^π(s) ← V^π(s) + α (sample − V^π(s))
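A minimal sketch of this update as code, assuming the value estimates live in a dictionary and the transitions arrive one at a time while following the fixed policy.

```python
def td_update(V, s, r, s_next, alpha=0.5, gamma=1.0):
    """One temporal-difference update toward the observed successor's value."""
    sample = r + gamma * V.get(s_next, 0.0)              # one-sample estimate of V(s)
    V[s] = (1 - alpha) * V.get(s, 0.0) + alpha * sample  # running average
    return V[s]
```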

Exponential Moving Average
❖ Exponential moving average
❖ The running interpolation update: x̄_n = (1 − α) · x̄_{n−1} + α · x_n
❖ Makes recent samples more important: x̄_n = (x_n + (1 − α) x_{n−1} + (1 − α)² x_{n−2} + ⋯) / (1 + (1 − α) + (1 − α)² + ⋯)
❖ Forgets about the past (distant past values were wrong anyway)
❖ Decreasing learning rate (𝛼) can give converging averages

Example: Temporal Difference Learning
States Observed Transitions
B, east, C, -2 C, east, D, -2
Assume: γ = 1, α = 1/2
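To make the update concrete (assuming, for illustration, current estimates V(B) = 0, V(C) = 0, V(D) = 8): after observing (B, east, C, −2), V(B) ← ½·0 + ½·(−2 + 1·0) = −1; after then observing (C, east, D, −2), V(C) ← ½·0 + ½·(−2 + 1·8) = 3. Only the states actually visited are updated.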

Problems with TD Value Learning
❖ TD value learning is a model-free way to do policy evaluation, mimicking Bellman updates with running sample averages
❖ However, if we want to turn values into a (new) policy, we’re sunk:
❖ Idea: learn Q-values, not values
❖ Makes action selection model-free too!
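The problem in symbols: extracting a policy from state values needs a one-step lookahead through the unknown model,

π(s) = argmax_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V(s') ]

whereas with Q-values the greedy action is just π(s) = argmax_a Q(s, a), with no T or R required.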

How to Solve RL Problems?
❖ In online RL, the agent interacts with the world
❖ It learns from transitions (s_t, a_t, r_t, s_{t+1})
❖ Model-based learning
❖ Model-free learning
❖ Policy evaluation
❖ Policy learning

Active Reinforcement Learning
❖ Full reinforcement learning: optimal policies (like value iteration)
❖ You don’t know the transitions T(s,a,s’)
❖ You don’t know the rewards R(s,a,s’)
❖ You choose the actions now
❖ Goal: learn the optimal policy / values
❖ In this case:
❖ Learner makes choices!
❖ Fundamental tradeoff: exploration vs. exploitation
❖ This is NOT offline planning! You actually take actions in the world and find out what happens…

Detour: Q-Value Iteration
❖ Value iteration: find successive (depth-limited) values
❖ Start with V0(s) = 0, which we know is right
❖ Given Vk, calculate the depth k+1 values for all states:
❖ But Q-values are more useful, so compute them instead
❖ Start with Q0(s,a) = 0, which we know is right
❖ Given Qk, calculate the depth k+1 q-values for all q-states:
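Written out (matching the MDP lectures), the two updates are:

Value iteration:   V_{k+1}(s) ← max_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V_k(s') ]
Q-value iteration: Q_{k+1}(s, a) ← Σ_{s'} T(s, a, s') [ R(s, a, s') + γ max_{a'} Q_k(s', a') ]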

Q-Learning
❖ Q-Learning: sample-based Q-value iteration
❖ Learn Q(s,a) values as you go
❖ Receive a sample (s,a,r,s’)
❖ Consider your old estimate:
❖ Consider your new sample estimate:
❖ Incorporate the new estimate into a running average:
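In symbols, with learning rate α:

old estimate: Q(s, a)
new sample estimate: sample = r + γ max_{a'} Q(s', a')
running average: Q(s, a) ← (1 − α) Q(s, a) + α · sample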

Q-Learning Pseudo-Code
❖ Most classic model-free RL algorithm
❖ Learn 𝑄∗(𝑠, 𝑎) from observed transitions s, a, r, s’
❖ Choose action 𝑎 in state 𝑠 and observe r and 𝑠’
❖ Update current estimate of 𝑄∗(𝑠, a)
❖ Exploit by using greedy policy from 𝑄∗(𝑠, 𝑎)
❖ Need to explore as T, R are unknown
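A minimal sketch of this loop in code, assuming a dictionary-backed Q-table and a hypothetical environment object with reset()/step(a) methods and an actions(s) list; exploration here is ε-greedy (discussed on the next slides).

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn Q*(s,a) from sampled transitions (s, a, r, s')."""
    Q = defaultdict(float)  # (s, a) -> estimated Q-value, defaults to 0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < epsilon:                       # explore
                a = random.choice(acts)
            else:                                               # exploit current estimates
                a = max(acts, key=lambda act: Q[(s, act)])
            r, s_next, done = env.step(a)                       # observe r and s'
            best_next = 0.0 if done else max(Q[(s_next, act)] for act in env.actions(s_next))
            sample = r + gamma * best_next
            Q[(s, a)] += alpha * (sample - Q[(s, a)])           # running average update
            s = s_next
    return Q
```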

Q-Learning Properties
❖ Amazing result: Q-learning converges to optimal policy — even if you’re acting suboptimally!
❖ This is called off-policy learning
❖ Caveats:
❖ You have to explore enough
❖ You have to eventually make the learning rate small enough
❖ … but not decrease it too quickly
❖ Basically, in the limit, it doesn’t matter how you select actions (!)

How to Explore?
❖ Several schemes for forcing exploration
Simplest: random actions (ε-greedy)
❖ Every time step, flip a coin
❖ With (small) probability ε, act randomly
❖ With (large) probability 1−ε, act on current policy
Problems with random actions?
❖ You do eventually explore the space, but keep thrashing around once learning is done
❖ One solution: lower ε over time
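A minimal sketch of the coin-flip rule, plus one possible (hypothetical) decay schedule implementing “lower ε over time”.

```python
import random

def epsilon_greedy(Q, s, actions, epsilon):
    """With probability epsilon act randomly, otherwise act greedily w.r.t. Q."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

def decayed_epsilon(t, eps0=1.0, eps_min=0.05, decay=0.001):
    """One possible schedule: start exploratory, settle near eps_min."""
    return max(eps_min, eps0 / (1.0 + decay * t))
```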

Video of Demo Q-Learning Auto

Exploration Function
❖ When to explore?
❖ Random actions: explore a fixed amount
❖ Better idea: explore areas whose badness is not (yet) established, eventually stop exploring
❖ Exploration function
❖ Takes a value estimate u and a visit count n, and returns an optimistic utility, e.g. f(u, n) = u + k/n
Regular Q-Update: Q(s, a) ← (1 − α) Q(s, a) + α [ r + γ max_{a'} Q(s', a') ]
Modified Q-Update: Q(s, a) ← (1 − α) Q(s, a) + α [ r + γ max_{a'} f( Q(s', a'), N(s', a') ) ]
❖ Note: this propagates the “bonus” back to states that lead to unknown states as well!
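A sketch of the modified update with an exploration bonus of the form f(u, n) = u + k/n, where N counts q-state visits; the constant k and the count bookkeeping are illustrative assumptions.

```python
def exploration_value(u, n, k=1.0):
    """Optimistic utility: rarely visited q-states look better than their estimate."""
    return u + k / (n + 1)          # slide form is u + k/n; the +1 guards n = 0

def modified_q_update(Q, N, s, a, r, s_next, actions_next, alpha=0.1, gamma=0.9):
    """Q-update whose target uses exploration-adjusted values of the successor."""
    if actions_next:
        best = max(exploration_value(Q.get((s_next, a2), 0.0), N.get((s_next, a2), 0))
                   for a2 in actions_next)
    else:
        best = 0.0                   # terminal state: no future value
    sample = r + gamma * best
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (sample - Q.get((s, a), 0.0))
    N[(s, a)] = N.get((s, a), 0) + 1
```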

How to Solve Large RL Problems?
Basic Q-Learning keeps a table of all q-values
In realistic situations, we cannot possibly learn about every single state!
❖ Too many states to visit them all in training
❖ Too many states to hold the q-tables in memory
Instead, we want to generalize:
❖ Learn about some small number of training states from
experience
❖ Generalize that experience to new, similar situations
❖ This is a fundamental idea in machine learning, and we’ll see it again
❖ Approximate Q-Learning
❖ Policy Search

Example: Pacman
Let’s say we discover through experience that this state is bad:
In naïve q-learning, we know nothing about this state:
Or even this one!

Feature-Based Representations
❖ Solution: describe a state using a vector of features (properties)
❖ Features are functions from states to real numbers (often 0/1) that capture important properties of the state
❖ Example features:
❖ Distance to closest ghost
❖ Distance to closest dot
❖ Number of ghosts
❖ 1 / (dist to dot)²
❖ Is Pacman in a tunnel? (0/1)
❖ …… etc.
❖ Can also describe a q-state (s, a) with features (e.g. action moves closer to food)
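A sketch of what such a feature function might look like for a Pacman-like q-state; the state representation here (a dict of grid coordinates) is an assumption for illustration, not the project’s actual game API.

```python
def extract_features(state, action):
    """Map a q-state (s, a) to a dictionary of real-valued features.

    `state` is assumed (for illustration) to be a dict with keys
    'pacman', 'ghosts', 'food', each holding grid coordinates.
    """
    def manhattan(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
    dx, dy = moves.get(action, (0, 0))
    x, y = state["pacman"]
    nxt = (x + dx, y + dy)                                # position after the action

    ghost_dist = min((manhattan(nxt, g) for g in state["ghosts"]), default=99)
    food_dist = min((manhattan(nxt, f) for f in state["food"]), default=99)
    return {
        "bias": 1.0,
        "dist-to-closest-ghost": float(ghost_dist),
        "inverse-dist-to-food-squared": 1.0 / (food_dist ** 2 + 1),
        "num-ghosts": float(len(state["ghosts"])),
    }
```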

Linear Value Functions
❖ Using a feature representation, we can write a q function (or value function) for any state using a few weights:
❖ Advantage: our experience is summed up in a few powerful numbers
❖ Disadvantage: states may share features but actually be very different in value!
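In symbols, the linear forms referred to above:

V(s) = w_1 f_1(s) + w_2 f_2(s) + ⋯ + w_n f_n(s)
Q(s, a) = w_1 f_1(s, a) + w_2 f_2(s, a) + ⋯ + w_n f_n(s, a)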

Approximate Q-Learning
❖ Q-learning with linear Q-functions: on each transition (s, a, r, s’), compute
difference = [ r + γ max_{a'} Q(s', a') ] − Q(s, a)
Exact Q’s: Q(s, a) ← Q(s, a) + α · difference
Approximate Q’s: w_i ← w_i + α · difference · f_i(s, a)
❖ Intuitive interpretation:
❖ Adjust weights of active features
❖ E.g., if something unexpectedly bad happens, blame the features that were on: disprefer all states with that state’s features
❖ Formal justification: online least squares
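A minimal sketch of the weight update, assuming features come back as a dictionary (as in the feature-extractor sketch above) and the weights live in a matching dictionary.

```python
from collections import defaultdict

def approximate_q_value(weights, features):
    """Q(s,a) = sum_i w_i * f_i(s,a) for a feature dictionary."""
    return sum(weights[name] * value for name, value in features.items())

def approximate_q_update(weights, extract_features, s, a, r, s_next, actions_next,
                         alpha=0.01, gamma=0.9):
    """Shift every active feature's weight in proportion to the TD error."""
    feats = extract_features(s, a)
    if actions_next:
        best_next = max(approximate_q_value(weights, extract_features(s_next, a2))
                        for a2 in actions_next)
    else:
        best_next = 0.0
    difference = (r + gamma * best_next) - approximate_q_value(weights, feats)
    for name, value in feats.items():
        weights[name] += alpha * difference * value   # w_i <- w_i + α·difference·f_i(s,a)

weights = defaultdict(float)   # weights default to 0 until a feature is seen
```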

Example: Q-Pacman

Q-Learning and Least Squares

Linear Approximation: Regression*
[Figure: linear regression: a line (one feature) or plane (two features) fit to data points; the prediction is a weighted sum of the features]

Optimization: Least Squares*
[Figure: each observation y vs. its prediction ŷ; the vertical gap is the error or “residual”, and least squares minimizes the total squared error]

Minimizing Error*
Imagine we had only one point x, with features f(x), target value y, and weights w:
Approximate q update explained: move each weight to reduce the error between the “target” (r + γ max_{a'} Q(s', a')) and the “prediction” (Q(s, a)).
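Reconstructing the derivation this slide sketches (standard online least squares for a single point, which is what justifies the approximate Q update):

error(w) = ½ ( y − Σ_k w_k f_k(x) )²
∂ error / ∂ w_m = − ( y − Σ_k w_k f_k(x) ) · f_m(x)
w_m ← w_m + α ( y − Σ_k w_k f_k(x) ) · f_m(x)

In the approximate Q update, the “target” r + γ max_{a'} Q(s', a') plays the role of y and the “prediction” Q(s, a) plays the role of Σ_k w_k f_k(x), giving w_m ← w_m + α [ r + γ max_{a'} Q(s', a') − Q(s, a) ] f_m(s, a).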

More Powerful Function Approximation
❖ Linear: Q(s, a) = w_1 f_1(s, a) + w_2 f_2(s, a) + ⋯ + w_n f_n(s, a)
❖ Polynomial:
❖ Q(s, a) = w_11 f_1(s, a) + w_12 f_1²(s, a) + w_13 f_1³(s, a) + ⋯
❖ Neural Network:
❖ Q(s, a) = w_1 f_1(s, a) + w_2 f_2(s, a) + ⋯ + w_n f_n(s, a)
❖ where the f_i’s are also learned
❖ General update: w_i ← w_i + α ( r + γ max_{a'} Q(s', a') − Q(s, a) ) · ∂Q(s, a)/∂w_i

Overfitting: Why Limiting Capacity Can Help*
[Figure: a degree-15 polynomial fit to a handful of data points: it passes through the training data but oscillates wildly between them, i.e., it overfits]

How to Solve Large RL Problems?
Basic Q-Learning keeps a table of all q-values
In realistic situations, we cannot possibly learn about every single state!
❖ Too many states to visit them all in training
❖ Too many states to hold the q-tables in memory
Instead, we want to generalize:
❖ Learn about some small number of training states from
experience
❖ Generalize that experience to new, similar situations
❖ This is a fundamental idea in machine learning, and we’ll see it again
❖ Approximate Q-Learning
❖ Policy Search

Policy Search
❖ Problem: often the feature-based policies that work well (win games, maximize utilities) aren’t the ones that approximate V / Q best
❖ E.g. your value functions from project 3 are probably horrible estimates of future rewards, but they still produced good decisions
❖ Q-learning’s priority: get Q-values close (modeling)
❖ Action selection priority: get ordering of Q-values right (prediction)
❖ Solution: learn policies that maximize rewards, not the values that predict them
❖ Policy search: start with an ok solution (e.g. Q-learning) then fine-tune by hill climbing on feature weights

Policy Search*
❖ Simplest policy search:
❖ Start with an initial linear value function or Q-function
❖ Nudge each feature weight up and down and see if your policy is better than before
❖ Problems:
❖ How do we tell the policy got better?
❖ Need to run many sample episodes!
❖ If there are a lot of features, this can be impractical
❖ Better methods exploit lookahead structure, sample wisely, change multiple parameters…
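A minimal sketch of the “nudge each weight” idea, assuming a rollout-based evaluator evaluate(weights) that runs sample episodes with the greedy policy for those weights and returns average reward (how to build one depends on the environment and is left abstract here).

```python
def hill_climb_policy_search(weights, evaluate, step=0.1, iterations=100):
    """Nudge each feature weight up/down; keep a change only if the policy improves.

    `evaluate(weights)` is assumed to average reward over many sample episodes
    (a noisy estimate, which is exactly the difficulty noted above).
    """
    best_score = evaluate(weights)
    for _ in range(iterations):
        improved = False
        for name in list(weights):
            for delta in (+step, -step):
                candidate = dict(weights)
                candidate[name] += delta
                score = evaluate(candidate)          # needs many episodes to be reliable
                if score > best_score:
                    weights, best_score, improved = candidate, score, True
        if not improved:
            step *= 0.5                              # shrink the nudge and keep trying
    return weights, best_score
```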

Policy Search

Regret
❖ Even if you learn the optimal policy, you still make mistakes along the way!
❖ Regret is a measure of your total mistake cost: the difference between your (expected) rewards, including youthful suboptimality, and optimal (expected) rewards
❖ Minimizing regret goes beyond learning to be optimal – it requires optimally learning to be optimal
❖ Example: random exploration and exploration functions both end up optimal, but random exploration has higher regret

Concluding Remarks
❖ Reinforcement Learning
❖ Very general learning problem.
❖ Algorithms
❖ TD learning, Q-learning
❖ Approximate Q-Learning, Policy Search
❖ For more information:
❖ AIMA, Chapter 21 for Reinforcement Learning
