CSC 311: Introduction to Machine Learning
Lecture 3 – Bagging, Linear Models I
Roger G., University of Toronto, Fall 2021
Today we will introduce ensembling methods that combine multiple models and can perform better than the individual members.
- We've seen many individual models (KNN, decision trees).
We will see bagging:
- Train models independently on random "resamples" of the training data.
We will introduce linear regression, our first parametric learning algorithm.
- This will exemplify how we'll think about learning algorithms for the rest of the course.
Bias/Variance Decomposition
Recall: we treat the prediction y at a query x as a random variable (the randomness comes from the choice of training dataset), y⋆ is the optimal deterministic prediction, and t is a random target sampled from the true conditional p(t|x).
E[(y − t)²] = (y⋆ − E[y])² + Var(y) + Var(t),
where the three terms are the bias (squared), the variance, and the Bayes error.
Bias/variance decomposes the expected loss into three terms:
- bias: how wrong the expected prediction is (corresponds to underfitting)
- variance: the amount of variability in the predictions (corresponds to overfitting)
- Bayes error: the inherent unpredictability of the targets
Even though this analysis only applies to squared error, we often loosely use “bias” and “variance” as synonyms for “underfitting” and “overfitting”.
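As a concrete check (not from the original slides; the noise level and the distribution of predictions below are made up for illustration), a short NumPy simulation can verify the decomposition numerically:

    import numpy as np

    rng = np.random.default_rng(0)

    # At a fixed query x: y_star is the optimal deterministic prediction,
    # t = y_star + noise is the random target, and y is the random prediction
    # (its randomness stands in for the choice of training set).
    y_star = 2.0
    sigma = 0.5                 # target noise std; Bayes error = sigma**2
    n = 1_000_000

    y = y_star + 0.3 + 0.4 * rng.standard_normal(n)   # biased, variable predictions
    t = y_star + sigma * rng.standard_normal(n)       # random targets

    expected_loss = np.mean((y - t) ** 2)
    decomposition = (y_star - y.mean()) ** 2 + y.var() + sigma ** 2
    print(expected_loss, decomposition)               # the two agree closely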
Bias/Variance Decomposition: Another Visualization
We can visualize this decomposition in output space, where the axes correspond to predictions on the test examples.
If we have an overly simple model (e.g. KNN with large k), it might have
- high bias (because it cannot capture the structure in the data)
- low variance (because there's enough data to get stable estimates)
Bias/Variance Decomposition: Another Visualization
If you have an overly complex model (e.g. KNN with k = 1), it might have
- low bias (since it learns all the relevant structure)
- high variance (it fits the quirks of the data you happened to sample)
Bias/Variance Decomposition: Another Visualization
The following graphic summarizes the previous two slides:
What doesn’t this capture?
A: Bayes error
Bagging: Motivation
Suppose we could somehow sample m independent training sets from p_sample.
We could then compute the prediction yi based on each one, and take the average y = (1/m) Σ_{i=1}^m yi.
How does this affect the three terms of the expected loss?
- Bayes error: unchanged, since we have no control over it
- Bias: unchanged, since the averaged prediction has the same expectation:
  E[y] = E[(1/m) Σ_{i=1}^m yi] = E[yi]
- Variance: reduced, since we're averaging over independent samples:
  Var[y] = Var[(1/m) Σ_{i=1}^m yi] = (1/m²) Σ_{i=1}^m Var[yi] = (1/m) Var[yi]
Bagging: The Idea
In practice, the sampling distribution p_sample is often finite or expensive to sample from.
So training separate models on independently sampled datasets is very wasteful of data!
- Why not train a single model on the union of all sampled datasets?
Solution: given training set D, use the empirical distribution p_D as a proxy for p_sample. This is called bootstrap aggregation, or bagging.
- Take a single dataset D with n examples.
- Generate m new datasets ("resamples" or "bootstrap samples"), each by sampling n training examples from D, with replacement.
- Average the predictions of models trained on each of these datasets.
The bootstrap is one of the most important ideas in all of statistics!
- Intuition: As |D| → ∞, we have p_D → p_sample.
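As a hypothetical sketch of the resampling-and-averaging procedure (the fit argument below is a stand-in for any training routine that returns a prediction function; it is not something defined in the slides):

    import numpy as np

    def bagged_prediction(X, t, fit, x_query, m=10, seed=None):
        # Train m models on bootstrap resamples of (X, t) and average their
        # predictions at x_query. fit(X, t) is assumed to return a callable
        # model such that model(x) gives a prediction.
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        preds = []
        for _ in range(m):
            idx = rng.integers(0, n, size=n)   # sample n indices with replacement
            model = fit(X[idx], t[idx])
            preds.append(model(x_query))
        return np.mean(preds, axis=0)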
(Figure: bootstrap resampling of a training set; in this example n = 7, m = 3.)
(Figure: predicting on a query point x.)
Bagging for Binary Classification
If our classifiers output real-valued probabilities, zi ∈ [0, 1], then we can average the predictions before thresholding:
ybagged = I(zbagged > 0.5) = I((1/m) Σ_{i=1}^m zi > 0.5)
If our classifiers output binary decisions, yi ∈ {0, 1}, we can still average the predictions before thresholding:
ybagged = I((1/m) Σ_{i=1}^m yi > 0.5)
This is the same as taking a majority vote.
A bagged classifier can be stronger than the average underlying model.
- E.g., individual accuracy on "Who Wants to be a Millionaire" is only so-so, but "Ask the Audience" is quite effective.
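In code, the two variants above might look like this (a sketch; the array shapes are my own convention):

    import numpy as np

    def bag_probabilities(z):
        # z: shape (m, n_queries), each row is one model's predicted
        # probability of class 1. Average first, then threshold at 0.5.
        return (np.mean(z, axis=0) > 0.5).astype(int)

    def majority_vote(y):
        # y: shape (m, n_queries), each row is one model's 0/1 decision.
        # Averaging 0/1 decisions and thresholding is a majority vote.
        return (np.mean(y, axis=0) > 0.5).astype(int)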
Bagging: Effect of Correlation
Problem: the datasets are not independent, so we don’t get the 1/m variance reduction.
- Possible to show that if the sampled predictions have variance σ² and correlation ρ, then
  Var[(1/m) Σ_{i=1}^m yi] = (1/m)(1 − ρ)σ² + ρσ².
Ironically, it can be advantageous to introduce additional variability into your algorithm, as long as it reduces the correlation between samples.
- Intuition: you want to invest in a diversified portfolio, not just one stock.
- It can help to average over multiple algorithms, or multiple configurations of the same algorithm.
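A quick Monte Carlo check of the variance formula above (a sketch; the shared-plus-independent construction is just one convenient way to generate predictions with variance σ² and pairwise correlation ρ):

    import numpy as np

    rng = np.random.default_rng(0)
    m, sigma, rho = 10, 2.0, 0.3
    n_trials = 200_000

    # correlated predictions: a shared component plus independent components
    shared = rng.standard_normal((n_trials, 1))
    indep = rng.standard_normal((n_trials, m))
    y = sigma * (np.sqrt(rho) * shared + np.sqrt(1 - rho) * indep)

    empirical = y.mean(axis=1).var()
    predicted = (1 / m) * (1 - rho) * sigma**2 + rho * sigma**2
    print(empirical, predicted)   # agree closely; with rho = 0 this is sigma**2 / m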
Random Forests
Random forests = bagged decision trees, with one extra trick to decorrelate the predictions
- When choosing each node of the decision tree, choose a random set of d input features, and only consider splits on those features.
Random forests are probably the best black-box machine learning algorithm — they often work well with no tuning whatsoever.
- one of the most widely used algorithms in Kaggle competitions
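For reference, a minimal usage sketch with scikit-learn (not part of the slides; assumes scikit-learn is installed and uses a synthetic dataset):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, t = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, t_train, t_test = train_test_split(X, t, random_state=0)

    # n_estimators = number of bagged trees; max_features = size of the random
    # feature subset considered at each split (the decorrelation trick above).
    rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
    rf.fit(X_train, t_train)
    print(rf.score(X_test, t_test))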
Bagging Summary
Bagging reduces overfitting by averaging predictions.
Used in most competition winners.
- Even if a single model is great, a small ensemble usually helps.
Limitations:
- Does not reduce bias in the case of squared error.
- There is still correlation between classifiers.
  - Random forest solution: Add more randomness.
- Naive mixture (all members weighted equally).
  - If members are very different (e.g., different algorithms, different data sources, etc.), we can often obtain better results by using a principled approach to weighted ensembling.
Linear Regression
Second learning algorithm of the course: linear regression.
- Task: predict scalar-valued targets (e.g. stock prices)
- Architecture: linear function of the inputs
While KNN was a complete algorithm, linear regression exemplifies a modular approach that will be used throughout this course:
- choose a model describing the relationships between variables of interest
- define a loss function quantifying how bad the fit to the data is
- choose a regularizer saying how much we prefer different candidate models (or explanations of data)
- fit a model that minimizes the loss function and satisfies the constraint/penalty imposed by the regularizer, possibly using an optimization algorithm
Mixing and matching these modular components gives us a lot of new ML methods.
Supervised Learning Setup
In supervised learning:
There is input x ∈ X , typically a vector of features (or covariates)
There is target t ∈ T (also called response, outcome, output, class)
Objective is to learn a function f : X → T such that t ≈ y = f(x), based on some data D = {(x(i), t(i)) for i = 1, 2, …, N}.
Linear Regression – Model
Model: In linear regression, we use a linear function of the features
x = (x1,…,xD) ∈ RD to make predictions y of the target value t ∈ R:
y = f(x) = Σ_j wj xj + b
- y is the prediction
- w is the weight vector
- b is the bias (or intercept)
w and b together are the parameters
We hope that our prediction is close to the target: y ≈ t.
What is Linear? 1 feature vs D features
(Figure: data points and fitted line; x-axis: features, y-axis: response.)
If we have only 1 feature:
y = wx + b where w, x, b ∈ R.
y is linear in x.
If we have D features:
y = w⊤x + b where w, x ∈ RD, b ∈ R
y is linear in x.
Relation between the prediction y and inputs x is linear in both cases.
Linear Regression – Loss Function
A loss function L(y, t) defines how bad it is if, for some example x, the algorithm predicts y, but the target is actually t.
Squared error loss function:
L(y, t) = ½ (y − t)²
y − t is the residual, and we want to make this small in magnitude. The ½ factor is just to make the calculations convenient.
Cost function: loss function averaged over all training examples
J(w, b) = (1/2N) Σ_{i=1}^N (y(i) − t(i))²
        = (1/2N) Σ_{i=1}^N (w⊤x(i) + b − t(i))²
Terminology varies. Some call "cost" empirical or average loss.
Vectorization
Vectorization
The prediction for one data point can be computed using a for loop:
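(The code itself did not survive in this transcript; a minimal sketch of what such a loop might look like, with made-up example values for w, x, and b:)

    import numpy as np

    w = np.array([0.5, -1.0, 2.0])   # weights (hypothetical values)
    b = 1.0                          # bias
    x = np.array([1.0, 0.0, 3.0])    # one example's features

    y = b
    for j in range(len(w)):          # loop over the D features
        y += w[j] * x[j]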
Excessive super/sub scripts are hard to work with, and Python loops are slow, so we vectorize algorithms by expressing them in terms of vectors and matrices.
w = (w1, …, wD)⊤, x = (x1, …, xD)⊤, y = w⊤x + b
This is simpler and executes much faster:
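(Again a sketch, reusing the w, x, b from the loop version above:)

    y = np.dot(w, x) + b             # same prediction, vectorized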
Vectorization
Why vectorize?
The equations, and the code, will be simpler and more readable.
Gets rid of dummy variables/indices!
Vectorized code is much faster:
- Cut down on Python interpreter overhead
- Use highly optimized linear algebra libraries (hardware support)
- Matrix multiplication very fast on GPU (Graphics Processing Unit)
Switching in and out of vectorized form is a skill you gain with practice.
Some derivations are easier to do element-wise.
Some algorithms are easier to write/understand using for-loops and vectorize later for performance
Vectorization
We can organize all the training examples into a design matrix X with one row per training example, and all the targets into the target vector t.
Computing the predictions for the whole dataset:
Xw + b1 = (w⊤x(1) + b, …, w⊤x(N) + b)⊤ = (y(1), …, y(N))⊤ = y
Vectorization
Computing the squared error cost across the whole dataset:
y = Xw + b1
J = (1/2N) ∥y − t∥²
Sometimes we may use J = ½ ∥y − t∥², without a normalizer. This would correspond to the sum of losses, and not the averaged loss. The minimizer does not depend on N (but optimization might!).
We can also add a column of 1's to the design matrix, combine the bias and the weights, and conveniently write
X = [[1, (x(1))⊤], [1, (x(2))⊤], …, [1, (x(N))⊤]] ∈ R^{N×(D+1)} and w = (b, w1, …, wD)⊤ ∈ R^{D+1}.
Then, our predictions reduce to y = Xw.
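A possible NumPy rendering of the non-augmented formulas above (explicit bias b; variable names are my own):

    import numpy as np

    def predict(X, w, b):
        # Vectorized predictions for the whole dataset: y = Xw + b*1
        return X @ w + b

    def cost(X, w, b, t):
        # Averaged squared error cost: J = (1/2N) ||y - t||^2
        y = predict(X, w, b)
        return np.sum((y - t) ** 2) / (2 * len(t))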
Optimization
Solving the Minimization Problem
We defined a cost function J (w). This is what we’d like to minimize.
Recall from calculus: the minimum of a smooth function (if it exists) occurs at a critical point, i.e. point where the derivative is zero.
multivariate generalization: set the partial derivatives ∂J /∂wj to zero.
Equivalently, we can set the gradient to zero. The gradient is the vector of partial derivatives:
∇wJ = ∂J/∂w = (∂J/∂w1, …, ∂J/∂wD)⊤
Solutions may be direct or iterative
Sometimes we can directly find provably optimal parameters (e.g. set the gradient to zero and solve in closed form). We call this a direct solution.
Iterative solution methods repeatedly apply an update rule that gradually takes us closer to the solution.
Direct Solution: Calculus
Let's consider a cartoon visualization of J(w) where w is one-dimensional.
Left: We seek w = w∗ that minimizes J(w).
Right: The gradient of a function tells us where its maxima and minima lie.
Strategy: Write down an algebraic expression for ∇wJ(w). Set it equal to 0. Solve for w.
(Figure: two plots of J(w); to the left of the minimum ∇wJ(w) < 0, to the right ∇wJ(w) > 0, and ∇wJ(w) = 0 at the minimum.)
Direct Solution: Calculus
We seek w to minimize J(w) = ½ ∥Xw − t∥²
Taking the gradient with respect to w (see course notes for additional details) and setting it to 0, we get:
∇wJ(w) = X⊤Xw − X⊤t = 0
Optimal weights:
w∗ = (X⊤X)⁻¹X⊤t
Linear regression is one of only a handful of models in this course that permit a direct solution.
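A sketch of the direct solution in NumPy (solving the linear system rather than forming the inverse explicitly, which is the numerically preferable route; the function name is my own):

    import numpy as np

    def fit_linear_regression(X, t):
        # Direct solution of least squares: solve (X^T X) w = X^T t.
        # Assumes X already contains a column of ones if a bias is wanted.
        return np.linalg.solve(X.T @ X, X.T @ t)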
Iterative solution: Gradient Descent
Most optimization problems we cover in this course don’t have a direct solution.
Now let’s see a second way to minimize the cost function which is more broadly applicable: gradient descent.
Gradient descent is an iterative algorithm, which means we apply an update repeatedly until some criterion is met.
We initialize the weights to something reasonable (e.g. all zeros) and repeatedly adjust them in the direction of steepest descent.
Gradient Descent
- if ∂J/∂wj > 0, then increasing wj increases J.
- if ∂J/∂wj < 0, then increasing wj decreases J.
The following update always decreases the cost function for small enough α (unless ∂J /∂wj = 0):
wj ← wj − α ∂J/∂wj
α > 0 is a learning rate (or step size). The larger it is, the faster w changes.
- We'll see later how to tune the learning rate, but values are typically small, e.g. 0.01 or 0.0001.
- If the cost is the sum of N individual losses rather than their average, a smaller learning rate will be needed (α′ = α/N).
Gradient Descent
This gets its name from the gradient. Recall the definition:
∇wJ = ∂J/∂w = (∂J/∂w1, …, ∂J/∂wD)⊤
- This is the direction of fastest increase in J.
Update rule in vector form:
w ← w − α ∂J/∂w
And for linear regression we have:
w ← w − (α/N) Σ_{i=1}^N (y(i) − t(i)) x(i)
So gradient descent updates w in the direction of fastest decrease.
Observe that once it converges, we get a critical point, i.e. ∂J/∂w = 0.
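A minimal gradient descent sketch for this update (my own variable names; assumes the bias has been absorbed into X as a column of ones):

    import numpy as np

    def gradient_descent(X, t, alpha=0.01, n_iters=1000):
        # Minimize J(w) = (1/2N) ||Xw - t||^2 by gradient descent.
        N, D = X.shape
        w = np.zeros(D)                  # initialize weights to zero
        for _ in range(n_iters):
            y = X @ w                    # predictions
            grad = X.T @ (y - t) / N     # dJ/dw = (1/N) X^T (y - t)
            w = w - alpha * grad         # take a step downhill
        return w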
Gradient Descent for Linear Regression
Even for linear regression, where there is a direct solution, we sometimes need to use GD.
Why gradient descent, if we can find the optimum directly?
- GD can be applied to a much broader set of models
- GD can be easier to implement than direct solutions
- For regression in high-dimensional space, GD is more efficient than the direct solution
  - Linear regression solution: (X⊤X)⁻¹X⊤t
  - Matrix inversion is an O(D³) algorithm
  - Each GD update costs O(ND), or less with stochastic gradient descent (SGD, covered next week)
  - Huge difference if D ≫ 1
Feature Mappings
Feature Mapping (Basis Expansion)
The relation between the input and output may not be linear.
We can still use linear regression by mapping the input features to another space using feature mapping (or basis expansion).
ψ(x) : RD → Rd and treat the mapped feature (in Rd) as the input of a linear regression procedure.
Let us see how it works when x ∈ R and we use a polynomial feature mapping.
Polynomial Feature Mapping
If the relationship doesn’t look linear, we can fit a polynomial.
Fit the data using a degree-M polynomial function of the form:
y = w0 + w1 x + w2 x² + … + wM x^M = Σ_{i=0}^M wi x^i
Here the feature mapping is ψ(x) = [1, x, x², …, x^M]⊤.
We can still use linear regression to find w since y = ψ(x)⊤w is linear in w0, w1, ….
In general, ψ can be any function. Another example:
ψ(x) = [1, sin(2πx), cos(2πx), sin(4πx), …]⊤.
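A quick sketch of the polynomial feature mapping in NumPy (the helper and the toy data are my own; np.vander with increasing=True builds the columns [1, x, x², …, x^M]):

    import numpy as np

    def poly_features(x, M):
        # Map a 1-D array x to the design matrix [1, x, x^2, ..., x^M].
        return np.vander(x, N=M + 1, increasing=True)

    # Example: fit a degree-3 polynomial with the least squares direct solution.
    x = np.linspace(-1, 1, 20)
    t = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(20)
    Phi = poly_features(x, M=3)
    w = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)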
Polynomial Feature Mapping with M = 0
-Pattern Recognition and Machine Learning, C. Bishop
Polynomial Feature Mapping with M = 1
y = w0 + w1x
-Pattern Recognition and Machine Learning, C. Bishop
Polynomial Feature Mapping with M = 3
y = w0 + w1 x + w2 x² + w3 x³
-Pattern Recognition and Machine Learning, C. Bishop
Polynomial Feature Mapping with M = 9
y = w0 + w1 x + w2 x² + w3 x³ + … + w9 x⁹
-Pattern Recognition and Machine Learning, C. Bishop
Model Complexity and Generalization
Underfitting (M = 0): model is too simple — does not fit the data.
Overfitting (M = 9): model is too complex — fits the training data perfectly.
Good model (M = 3): Achieves small test error (generalizes well).
Model Complexity and Generalization
As M increases, the magnitude of coefficients gets larger.
For M = 9, the coefficients have become finely tuned to the data. Between data points, the function exhibits large oscillations.
Regularization
Regularization
The degree M of the polynomial controls the model's complexity.
The value of M is a hyperparameter for polynomial expansion, just like k in KNN. We can tune it using a validation set.
Restricting the number of parameters / basis functions (M) is a crude approach to controlling the model complexity.
Another approach: keep the model large, but regularize it
- Regularizer: a function that quantifies how much we prefer one hypothesis vs. another
L2 (or l2 ) Regularization
We can encourage the weights to be small by choosing as our regularizer the L2 penalty:
R(w) = ½ ∥w∥₂² = ½ Σ_j wj²
- Note: To be precise, the L2 norm ∥w∥₂ is Euclidean distance, so we're regularizing the squared L2 norm.
The regularized cost function makes a tradeoff between fit to the data and the norm of the weights.
Jreg(w) = J(w) + λR(w) = J(w) + (λ/2) Σ_j wj²
If you fit training data poorly, J is large. If the weights are large in magnitude, R is large.
Large λ penalizes weight values more.
λ is a hyperparameter we can tune with a validation set.
L2 (or l2) Regularization
The geometric picture:
L2 Regularized Least Squares: Ridge regression
For the least squares problem, we have J(w) = (1/2N) ∥Xw − t∥².
When λ > 0 (with regularization), regularized cost gives
wRidge_λ = argmin_w Jreg(w) = argmin_w (1/2N) ∥Xw − t∥² + (λ/2) ∥w∥²
         = (X⊤X + λNI)⁻¹X⊤t
The case λ = 0 (no regularization) reduces to least squares
Note that it is also common to formulate this problem as argmin_w ½ ∥Xw − t∥² + (λ/2) ∥w∥², in which case the solution is wRidge = (X⊤X + λI)⁻¹X⊤t.
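A sketch of the ridge solution in NumPy (following the (1/2N) formulation above; the function name is my own):

    import numpy as np

    def fit_ridge(X, t, lam):
        # Ridge regression: w = (X^T X + lam * N * I)^{-1} X^T t
        N, D = X.shape
        return np.linalg.solve(X.T @ X + lam * N * np.eye(D), X.T @ t)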
Gradient Descent under L2 Regularization
Gradient descent update to minimize J:
w ← w − α ∂J/∂w
The gradient descent update to minimize the L2 regularized cost J + λR results in weight decay:
w ← w − α ∂(J + λR)/∂w
  = w − α (∂J/∂w + λ ∂R/∂w)
  = w − α (∂J/∂w + λw)
  = (1 − αλ)w − α ∂J/∂w
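The same update as a small sketch (grad_J is a stand-in for whatever computes ∂J/∂w):

    def weight_decay_step(w, grad_J, alpha, lam):
        # One L2-regularized gradient descent step: first shrink w toward
        # zero by a factor (1 - alpha*lam), then take the usual gradient step.
        return (1 - alpha * lam) * w - alpha * grad_J(w)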
Conclusion so far
Linear regression exemplifies recurring themes of this course:
choose a model and a loss function
formulate an optimization problem
solve the minimization problem using one of two strategies
- direct solution (set derivatives to zero)
- gradient descent
vectorize the algorithm, i.e. represent it in terms of linear algebra
make a linear model more powerful using feature mappings
improve the generalization by adding a regularizer