
CIS 471/571(Fall 2020): Introduction to Artificial Intelligence
Lecture 11: Reinforcement Learning (Part 2)
Thanh H. Nguyen
Source: http://ai.berkeley.edu/home.html

Reminder
§ Project 3: Reinforcement Learning
  § Deadline: Nov 10th, 2020
§ Homework 3: MDPs and Reinforcement Learning
  § Deadline: Nov 10th, 2020

Reinforcement Learning
§ We still assume an MDP:
  § A set of states s ∈ S
  § A set of actions (per state) A
  § A model T(s,a,s')
  § A reward function R(s,a,s')
§ Still looking for a policy π(s)
§ New twist: we don't know T or R, so we must try out actions
§Big idea: Compute all averages over T using sample outcomes

The Story So Far: MDPs and RL
Known MDP: Offline Solution
  Goal: Compute V*, Q*, π*           Technique: Value / policy iteration
  Goal: Evaluate a fixed policy π    Technique: Policy evaluation

Unknown MDP: Model-Based
  Goal: Compute V*, Q*, π*           Technique: VI/PI on approx. MDP
  Goal: Evaluate a fixed policy π    Technique: PE on approx. MDP

Unknown MDP: Model-Free
  Goal: Compute V*, Q*, π*           Technique: Q-learning
  Goal: Evaluate a fixed policy π    Technique: Value learning

Model-Free Learning
§ Model-free (temporal difference) learning (sketched below)
  § Experience the world through episodes
  § Update estimates after each transition
  § Over time, the updates will mimic Bellman updates
[Diagram: a stream of experience s, a → r, s' → a', s'' ...; each observed transition (s, a, r, s') triggers an update]

Q-Learning
§ We'd like to do Q-value updates to each Q-state:
    Q(s,a) ← Σ_{s'} T(s,a,s') [ R(s,a,s') + γ max_{a'} Q(s',a') ]
  § But we can't compute this update without knowing T and R
§ Instead, compute the average as we go
  § Receive a sample transition (s,a,r,s')
  § This sample suggests
      sample = r + γ max_{a'} Q(s',a')
  § But we want to average over results from (s,a) (Why?)
  § So keep a running average (sketched below):
      Q(s,a) ← (1-α) Q(s,a) + α · sample
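Below is a minimal sketch of this running-average Q-update in Python; the table layout, constants, and helper names are assumptions for illustration, not the project's API.

```python
from collections import defaultdict

# Minimal sketch of the tabular Q-learning running average.
alpha, gamma = 0.5, 0.9            # example learning rate and discount
Q = defaultdict(float)             # Q[(state, action)] defaults to 0

def q_update(s, a, r, s_next, actions_next):
    """Blend the old estimate for (s, a) with the new sample r + gamma * max_a' Q(s', a')."""
    # If s' is terminal (no actions), the max defaults to 0.
    sample = r + gamma * max((Q[(s_next, a2)] for a2 in actions_next), default=0.0)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
```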

Example
§ Two states: A, B
§ Two actions: Up, Down
§ Discount factor: 𝛾 = 0.5
§ Learning rate: 𝛼 = 0.5
§ Q(A, Down) = ?
§ Q(B, Up) = ?

Q-Learning Properties
§Amazing result: Q-learning converges to optimal policy — even if you’re acting suboptimally!
§ Caveats:
  § You have to explore enough
  § You have to eventually make the learning rate small enough
  § ... but not decrease it too quickly
§Basically, in the limit, it doesn’t matter how you select actions (!)

Exploration vs. Exploitation

How to Explore?
§ Several schemes for forcing exploration
  § Simplest: random actions (ε-greedy; sketched below)
    § Every time step, flip a coin
    § With (small) probability ε, act randomly
    § With (large) probability 1-ε, act on the current policy
§ Problems with random actions?
  § You do eventually explore the space, but you keep thrashing around once learning is done
  § One solution: lower ε over time
  § Another solution: exploration functions
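A minimal ε-greedy sketch, assuming Q is a dictionary keyed by (state, action) and legal_actions is a non-empty list; the decay schedule is shown only as a comment.

```python
import random

def epsilon_greedy(state, legal_actions, Q, epsilon=0.1):
    """With probability epsilon act randomly, otherwise act greedily on the current Q."""
    if random.random() < epsilon:                              # explore
        return random.choice(legal_actions)
    return max(legal_actions, key=lambda a: Q.get((state, a), 0.0))  # exploit

# One common variant: decay epsilon over time, e.g. epsilon = 1.0 / (episode + 1).
```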

Exploration Functions
§ When to explore?
  § Random actions: explore a fixed amount
  § Better idea: explore areas whose badness is not (yet) established, eventually stop exploring
§ Exploration function
  § Takes a value estimate u and a visit count n, and returns an optimistic utility, e.g. f(u,n) = u + k/n
  § Regular Q-update:  Q(s,a) ← (1-α) Q(s,a) + α [ r + γ max_{a'} Q(s',a') ]
  § Modified Q-update (sketched below):  Q(s,a) ← (1-α) Q(s,a) + α [ r + γ max_{a'} f(Q(s',a'), N(s',a')) ]
§ Note: this propagates the "bonus" back to states that lead to unknown states as well!
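A sketch of the modified update with the optimistic utility f(u, n) = u + k/n; the tables, the constant k, and the "+1" guard against division by zero for unvisited pairs are illustrative assumptions.

```python
from collections import defaultdict

alpha, gamma, k = 0.5, 0.9, 1.0
Q = defaultdict(float)             # Q-value estimates
N = defaultdict(int)               # visit counts per (state, action)

def f(u, n):
    """Optimistic utility: the exploration bonus shrinks as visits grow."""
    return u + k / (n + 1)

def q_update_with_exploration(s, a, r, s_next, actions_next):
    N[(s, a)] += 1
    sample = r + gamma * max((f(Q[(s_next, a2)], N[(s_next, a2)]) for a2 in actions_next),
                             default=0.0)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
```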

Regret
§ Even if you learn the optimal policy, you still make mistakes along the way!
§ Regret is a measure of your total mistake cost: the difference between your (expected) rewards, including youthful suboptimality, and optimal (expected) rewards
§ Minimizing regret goes beyond learning to be optimal – it requires optimally learning to be optimal
§ Example: random exploration and exploration functions both end up optimal, but random exploration has higher regret

Approximate Q-Learning

Generalizing Across States
§ Basic Q-Learning keeps a table of all q-values
§In realistic situations, we cannot possibly learn about every single state!
§ Too many states to visit them all in training
§ Too many states to hold the q-tables in memory
§ Instead, we want to generalize:
  § Learn about some small number of training states from experience
§ Generalize that experience to new, similar situations
§ This is a fundamental idea in machine learning, and we’ll see it over and over again

Example: Pacman
Let's say we discover through experience that this state is bad:
In naïve Q-learning, we know nothing about this state:
Or even this one!
[Demo: Q-learning – pacman – tiny – watch all (L11D5)],[Demo: Q-learning – pacman – tiny – silent train (L11D6)], [Demo: Q-learning – pacman – tricky – watch all (L11D7)]

Feature-Based Representations
§ Solution: describe a state using a vector of features (properties)
  § Features are functions from states to real numbers (often 0/1) that capture important properties of the state
  § Example features:
    § Distance to closest ghost
    § Distance to closest dot
    § Number of ghosts
    § 1 / (distance to closest dot)²
    § Is Pacman in a tunnel? (0/1)
    § ... etc.
    § Is it the exact state on this slide?
  § Can also describe a q-state (s, a) with features (e.g., action moves closer to food); a sketch of such a feature function follows
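As an illustration of a feature function for a q-state, here is a hedged sketch; the toy state with precomputed distances per action is hypothetical, since the real project supplies its own state API.

```python
def get_features(state, action):
    """Map a q-state (s, a) to a dict of named real-valued features."""
    ghost_d = state["ghost_dist"][action]   # distance to closest ghost after taking `action`
    dot_d = state["dot_dist"][action]       # distance to closest dot after taking `action`
    return {
        "bias": 1.0,
        "closest-ghost": ghost_d,
        "closest-dot": dot_d,
        "inv-dot-dist-sq": 1.0 / (dot_d ** 2 + 1.0),  # the 1/(dist to dot)^2 feature, kept finite
    }

# Example with a toy state:
toy_state = {"ghost_dist": {"North": 3.0}, "dot_dist": {"North": 2.0}}
print(get_features(toy_state, "North"))
```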

Linear Value Functions
§ Using a feature representation, we can write a q function (or value function) for any state using a few weights (sketched below):
    V(s)   = w₁ f₁(s)   + w₂ f₂(s)   + ... + wₙ fₙ(s)
    Q(s,a) = w₁ f₁(s,a) + w₂ f₂(s,a) + ... + wₙ fₙ(s,a)
§ Advantage: our experience is summed up in a few powerful numbers
§ Disadvantage: states may share features but actually be very different in value!
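A minimal sketch of the linear Q-function, storing weights and features as name-keyed dictionaries (an assumption for readability).

```python
def q_value(weights, features):
    """Q(s, a) = sum_i w_i * f_i(s, a), with missing weights treated as 0."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

weights = {"bias": 1.0, "closest-ghost": 0.5, "inv-dot-dist-sq": 2.0}
feats = {"bias": 1.0, "closest-ghost": 3.0, "inv-dot-dist-sq": 0.2}
print(q_value(weights, feats))   # 1.0 + 1.5 + 0.4 = 2.9
```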

Approximate Q-Learning
§ Q-learning with linear Q-functions:
    difference = [ r + γ max_{a'} Q(s',a') ] − Q(s,a)
    Exact Q's:        Q(s,a) ← Q(s,a) + α · difference
    Approximate Q's:  wᵢ ← wᵢ + α · difference · fᵢ(s,a)
§ Intuitive interpretation:
  § Adjust weights of active features
  § E.g., if something unexpectedly bad happens, blame the features that were on: disprefer all states with that state's features
§ Formal justification: online least squares (a sketch of the weight update follows)
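A sketch of the approximate Q-learning weight update, reusing the q_value() helper sketched above; names and constants are illustrative, not the project's API.

```python
def approx_q_update(weights, feats, r, max_q_next, alpha=0.05, gamma=0.9):
    """Compute the TD difference once, then nudge each active feature's weight."""
    difference = (r + gamma * max_q_next) - q_value(weights, feats)
    for name, value in feats.items():
        weights[name] = weights.get(name, 0.0) + alpha * difference * value
    return weights
```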

Example: Q-Pacman

Q-Learning and Least Squares

Linear Approximation: Regression*
[Figure: linear regression examples in one and two feature dimensions, each showing the fitted prediction as a function of the features]

Optimization: Least Squares*
[Figure: a least squares fit showing an observation y, a prediction ŷ, and the error or "residual" y − ŷ; least squares minimizes the sum of squared residuals]

Minimizing Error*
Imagine we had only one point x, with features f(x), target value y, and weights w:
    error(w) = ½ ( y − Σₖ wₖ fₖ(x) )²
    ∂ error(w) / ∂ wₘ = − ( y − Σₖ wₖ fₖ(x) ) fₘ(x)
    wₘ ← wₘ + α ( y − Σₖ wₖ fₖ(x) ) fₘ(x)
Approximate q update explained: the same gradient step, with the observed sample as the "target" and the current Q-value as the "prediction":
    wₘ ← wₘ + α [ (r + γ max_{a'} Q(s',a')) − Q(s,a) ] fₘ(s,a)
    where "target" = r + γ max_{a'} Q(s',a') and "prediction" = Q(s,a)

Overfitting: Why Limiting Capacity Can Help*
[Figure: a degree-15 polynomial fit to training data, illustrating how too much capacity overfits]

Policy Search

Policy Search
§ Problem: often the feature-based policies that work well (win games, maximize utilities) aren’t the ones that approximate V / Q best
§ E.g. your value functions from project 2 were probably horrible estimates of future rewards, but they still produced good decisions
§ Q-learning’s priority: get Q-values close (modeling)
§ Action selection priority: get ordering of Q-values right (prediction)
§ We’ll see this distinction between modeling and prediction again later in the course
§ Solution: learn policies that maximize rewards, not the values that predict them
§ Policy search: start with an ok solution (e.g. Q-learning) then fine-tune by hill climbing on feature weights

Policy Search
§ Simplest policy search (sketched below):
  § Start with an initial linear value function or Q-function
  § Nudge each feature weight up and down and see if your policy is better than before
§ Problems:
  § How do we tell whether the policy got better?
    § Need to run many sample episodes!
    § If there are a lot of features, this can be impractical
§ Better methods exploit lookahead structure, sample wisely, change multiple parameters...
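A rough sketch of this hill-climbing loop; run_episodes is a hypothetical helper standing in for your simulator, returning the average return of the policy induced by the given weights.

```python
import random

def hill_climb(weights, run_episodes, step=0.1, n_episodes=100, iterations=50):
    """Perturb one feature weight at a time; keep the change only if the average return improves."""
    best_return = run_episodes(weights, n_episodes)      # score the current weights
    for _ in range(iterations):
        name = random.choice(list(weights))              # pick one feature weight to nudge
        candidate = dict(weights)
        candidate[name] += random.choice([-step, step])
        candidate_return = run_episodes(candidate, n_episodes)
        if candidate_return > best_return:               # keep only improvements
            weights, best_return = candidate, candidate_return
    return weights
```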

Conclusion
§ We’re done with Part I: Search and Planning!
§We’ve seen how AI methods can solve problems in:
§ Search
§ Constraint Satisfaction Problems
§ Games
§ Markov Decision Problems
§ Reinforcement Learning
§Next up: Part II: Uncertainty and Learning!