
COMS 4771 Regression
Nakul Verma

Last time…
• Support Vector Machines
• Maximum Margin formulation
• Constrained Optimization
• Lagrange Duality Theory
• Convex Optimization
• SVM dual and Interpretation
How to get the optimal solution

Learning more Sophisticated Outputs
So far we have focused on classification f : X → {1, …, k}
What about other outputs?
• PM2.5 (pollutant) particulate matter exposure estimate:
Input: # cars, temperature, etc. Output: 50 ppb
• Pose estimation
• Sentence structure estimate:

Regression
We’ll focus on problems with real number outputs (regression problem):
Example:
Next eruption time of the Old Faithful geyser (at Yellowstone)

Regression Formulation for the Example
Given $x$, want to predict an estimate $\hat{y}$ of $y$ which minimizes the discrepancy (loss $L$) between $\hat{y}$ and $y$.
Loss examples: absolute error $L(\hat{y}, y) = |\hat{y} - y|$ and squared error $L(\hat{y}, y) = (\hat{y} - y)^2$.
A linear predictor $f$ can be defined by the slope $w$ and the intercept $w_0$:
$$f(x) = w \cdot x + w_0,$$
which minimizes the prediction loss.
How is this different from classification?

Parametric vs non-parametric Regression
If we assume a particular form of the regressor:
Parametric regression
Goal: to learn the parameters which yield the minimum error/loss
If no specific form of regressor is assumed:
Non-parametric regression
Goal: to learn the predictor directly from the input data that yields the minimum error/loss

Linear Regression
Want to find a linear predictor $f$, i.e., a weight vector $w$ (intercept $w_0$ absorbed via lifting), $f(x) = w \cdot x$, which minimizes the prediction loss over the population.
We estimate the parameters by minimizing the corresponding loss on the training data, e.g., for squared error:
$$\hat{w} = \arg\min_w \frac{1}{n} \sum_{i=1}^n (w \cdot x_i - y_i)^2$$
(Geometrically: the best-fit line/hyperplane through the training data.)

Linear Regression: Learning the Parameters
Linear predictor with squared loss:
$$\min_w \big\| Xw - y \big\|_2^2 \;=\; \min_w \left\| \begin{bmatrix} \cdots\, x_1\, \cdots \\ \vdots \\ \cdots\, x_i\, \cdots \\ \vdots \\ \cdots\, x_n\, \cdots \end{bmatrix} w \;-\; \begin{bmatrix} y_1 \\ \vdots \\ y_i \\ \vdots \\ y_n \end{bmatrix} \right\|_2^2$$
Can take the gradient and examine the stationary points!
Unconstrained problem!
Why do we not need to check the second-order conditions?

Linear Regression: Learning the Parameters
Best-fitting $w$: at a stationary point,
$$\nabla_w \|Xw - y\|_2^2 = 2X^{\mathsf T}(Xw - y) = 0 \;\;\Longrightarrow\;\; w_{\mathrm{ols}} = (X^{\mathsf T}X)^{-1} X^{\mathsf T}\, y$$
$(X^{\mathsf T}X)^{-1} X^{\mathsf T}$ is the pseudo-inverse of $X$.
Also called the Ordinary Least Squares (OLS) solution.
The solution is unique and stable when $X^{\mathsf T}X$ is invertible.
What is the interpretation of this solution?
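As a concrete illustration, a minimal NumPy sketch of this closed form; the function name and toy data below are illustrative, not from the slides:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares: w = (X^T X)^{-1} X^T y.

    X: (n, d) design matrix (append a column of ones to absorb the intercept).
    y: (n,) vector of real-valued targets.
    """
    # Solve the normal equations X^T X w = X^T y.
    # np.linalg.solve is preferred over forming the explicit inverse.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy usage: fit y ≈ 2*x + 1 from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.5, size=100)
X = np.column_stack([x, np.ones_like(x)])   # lifting: last column is the intercept
w_ols = ols_fit(X, y)
print(w_ols)   # approximately [2.0, 1.0]
```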

Linear Regression: Geometric Viewpoint
Consider the column-space view of the data $X$: find a $w$ such that the linear combination $Xw$ of the columns of $X$ minimizes $\|Xw - y\|_2^2$.
Say $\hat{y}$ is the OLS solution, i.e., $\hat{y} = X w_{\mathrm{ols}} = X (X^{\mathsf T}X)^{-1} X^{\mathsf T}\, y$.
Thus, $\hat{y} = \Pi(y)$ is the orthogonal projection of $y$ onto the column space of $X$! $w_{\mathrm{ols}}$ forms the coefficients of $\hat{y}$.
The projection matrix is $\Pi = X (X^{\mathsf T}X)^{-1} X^{\mathsf T}$, and the residual $y - \hat{y}$ is orthogonal to that column space.
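A quick numerical check of this projection view, using the same kind of toy data as the OLS sketch above (the setup is illustrative):

```python
import numpy as np

# Toy data, as in the OLS sketch.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(0, 0.5, size=100)
X = np.column_stack([x, np.ones_like(x)])

w_ols = np.linalg.solve(X.T @ X, X.T @ y)
Pi = X @ np.linalg.solve(X.T @ X, X.T)   # projection matrix Π = X (X^T X)^{-1} X^T
y_hat = Pi @ y                           # Π(y)
residual = y - y_hat

print(np.allclose(y_hat, X @ w_ols))     # True: Π(y) equals the OLS fitted values
print(np.allclose(Pi @ Pi, Pi))          # True: Π is idempotent (a projection)
print(np.allclose(X.T @ residual, 0))    # True: residual is orthogonal to the column space
```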

Linear Regression: Statistical Modeling View
Let’s assume that data is generated from the following process:
• An example $x_i$ is drawn independently from the data space $X$
• $y_{\text{clean}}$ is computed as $w \cdot x_i$, from a fixed unknown $w$
• $y_{\text{clean}}$ is corrupted by adding independent Gaussian noise $N(0, \sigma^2)$, yielding $y_i$
• $(x_i, y_i)$ is revealed as the $i$-th sample
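A short simulation of this generative process; the dimension, $w$, and $\sigma$ values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 3, 0.5
w_true = rng.normal(size=d)                 # fixed unknown w

X = rng.uniform(-1, 1, size=(n, d))         # x_i drawn independently from the data space
y_clean = X @ w_true                        # y_clean = w . x_i
y = y_clean + rng.normal(0, sigma, size=n)  # corrupted with independent N(0, sigma^2) noise
# The revealed samples are the pairs (x_i, y_i): rows of X paired with entries of y.
```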

Linear Regression: Statistical Modeling View
How can we determine $w$ from the Gaussian-noise-corrupted observations?
Observation: $y_i \mid x_i \sim N(w \cdot x_i,\, \sigma^2)$, with unknown parameter $w$.
How do we estimate the parameters of a Gaussian? Let's try Maximum Likelihood Estimation!
$$\hat{w}_{\mathrm{MLE}} = \arg\max_w \prod_{i=1}^n p(y_i \mid x_i; w) = \arg\min_w \sum_{i=1}^n (y_i - w \cdot x_i)^2 \qquad \text{(ignoring terms independent of } w\text{)}$$
Optimizing for $w$ yields the same OLS result!
What happens if we model each yi with indep. noise of different variance?
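For reference, the standard log-likelihood algebra behind this claim (a routine derivation, spelled out here rather than quoted from the slides):
$$\log \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\Big(-\tfrac{(y_i - w\cdot x_i)^2}{2\sigma^2}\Big) \;=\; -\frac{n}{2}\log(2\pi\sigma^2) \;-\; \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - w\cdot x_i)^2 .$$
Maximizing over $w$ drops the constant term and the $1/(2\sigma^2)$ factor, leaving exactly the least-squares objective; if each $y_i$ has its own variance $\sigma_i^2$, the same steps give a weighted least-squares objective instead.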

Linear Regression for Classification?
Linear regression seems general; can we use it to derive a binary classifier? Let's study 1-d data:
(Figure: 1-d data with binary labels $Y=1$ and $Y=0$ plotted against $X$, with a linear regression fit.)
Problem #1: Where is $y$ for regression?
Problem #2: Not really linear!
Perhaps it is linear in some transformed coordinates?

Linear Regression for Classification
(Figure: a sigmoid curve fit to the data, with the $Y=1$ observations above and the $Y=0$ observations below.)
Interpretation: a sigmoid is a better model!
Binary predictor: predict $Y=1$ when the model output exceeds $1/2$, else $Y=0$.
For an event that occurs with probability $P$, the odds of that event are $\frac{P}{1-P}$.
For an event with $P=0.9$, odds $= 9$; but for an event with $P=0.1$, odds $= 0.11$ (very asymmetric).
Now consider the "log" of the odds, $\log\frac{P}{1-P}$: for $P=0.9$ it is $+2.197$ and for $P=0.1$ it is $-2.197$. Symmetric!
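A two-line numeric check of these odds and log-odds values:

```python
import math

for p in (0.9, 0.1):
    odds = p / (1 - p)
    print(f"P={p}: odds={odds:.2f}, log-odds={math.log(odds):+.3f}")
# P=0.9: odds=9.00, log-odds=+2.197
# P=0.1: odds=0.11, log-odds=-2.197   (symmetric about 0)
```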

Logistic Regression
Model the log-odds or logit with a linear function!
$$\log \frac{P(Y=1 \mid x)}{1 - P(Y=1 \mid x)} = w \cdot x \quad\Longleftrightarrow\quad P(Y=1 \mid x) = \frac{1}{1 + e^{-w \cdot x}}$$
Sigmoid!
(Figure: the resulting sigmoid fit of $P(Y=1 \mid x)$ against $X$, with the $Y=1$ and $Y=0$ observations.)
OK, we have a model; how do we learn the parameters?

Logistic Regression: Learning Parameters
Given samples $(x_i, y_i)$ with $y_i \in \{0,1\}$ binary, the likelihood is Bernoulli (Binomial):
$$L(w) = \prod_{i=1}^n p_i^{\,y_i} (1 - p_i)^{1 - y_i}$$
Now, use the logistic model: $p_i = P(Y=1 \mid x_i) = \frac{1}{1 + e^{-w \cdot x_i}}$.
Can take the derivative and analyze the stationary points; unfortunately, there is no closed-form solution
(use iterative methods like gradient descent to find the solution)
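A minimal NumPy sketch of fitting this model by gradient descent on the negative log-likelihood; the step size, iteration count, and toy data are illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_fit(X, y, lr=0.1, n_iters=5000):
    """Gradient descent on the negative log-likelihood.

    X: (n, d) design matrix, y: (n,) labels in {0, 1}.
    Gradient of the (averaged) negative log-likelihood: X^T (sigmoid(Xw) - y) / n.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = sigmoid(X @ w)               # predicted P(Y=1 | x_i) for each sample
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad
    return w

# Toy usage: 1-d data with an intercept column.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = (rng.uniform(size=200) < sigmoid(2 * x - 1)).astype(float)
X = np.column_stack([x, np.ones_like(x)])
w_hat = logistic_fit(X, y)
print(w_hat)   # roughly recovers [2, -1]
```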

Linear Regression: Other Variations
Back to the ordinary least squares (ols):
Often poorly behaved when $X^{\mathsf T}X$ is not invertible.
Additionally, how can we incorporate prior knowledge?
• perhaps we want $w$ to be sparse → Lasso regression
• perhaps we want a simple (small-norm) $w$ → Ridge regression

Ridge Regression
Objective:
$$w_{\mathrm{ridge}} = \arg\min_w \underbrace{\|Xw - y\|_2^2}_{\text{reconstruction error}} \;+\; \lambda \|w\|_2^2, \qquad \lambda \ge 0 \;\;\text{(`regularization' parameter)}$$
The regularization helps avoid overfitting and always results in a unique solution.
Equivalent to the following optimization problem:
$$\min_w \|Xw - y\|_2^2 \quad \text{s.t.} \quad \|w\|_2^2 \le B$$
Why?
(Geometrically: the elliptical level sets of the reconstruction error meet the $\ell_2$-ball constraint set.)
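A NumPy sketch of the ridge solution; the closed form $(X^{\mathsf T}X + \lambda I)^{-1}X^{\mathsf T}y$ is the standard minimizer of the objective above, and the default $\lambda$ here is arbitrary:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression: w = (X^T X + lam * I)^{-1} X^T y.

    Adding lam * I makes the matrix positive definite, so the solution
    exists and is unique even when X^T X is not invertible.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```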

Lasso Regression
Objective:
$$w_{\mathrm{lasso}} = \arg\min_w \|Xw - y\|_2^2 \;+\; \lambda \underbrace{\|w\|_1}_{\text{`lasso' penalty}}$$
No closed-form solution.
Lasso regularization encourages sparse solutions. Equivalent to the following optimization problem:
$$\min_w \|Xw - y\|_2^2 \quad \text{s.t.} \quad \|w\|_1 \le B$$
Why? How can we find the solution?
(Geometrically: the $\ell_1$-ball constraint has corners on the coordinate axes, so the minimizer tends to land on a sparse point.)
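One standard way to solve the lasso objective (not necessarily the method from lecture) is proximal gradient descent (ISTA), which alternates a gradient step on the squared error with coordinate-wise soft-thresholding; a rough NumPy sketch with illustrative step size and $\lambda$:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each coordinate toward 0 by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_fit(X, y, lam=1.0, n_iters=2000):
    """ISTA for min_w ||Xw - y||^2 + lam * ||w||_1."""
    # Step size 1/L, where L is the Lipschitz constant of the gradient of ||Xw - y||^2.
    L = 2 * np.linalg.norm(X, ord=2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = 2 * X.T @ (X @ w - y)          # gradient of the squared-error term
        w = soft_threshold(w - grad / L, lam / L)
    return w
```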

What About Optimality?
Linear regression (and variants) is great, but what can we say about the best possible estimate?
Can we construct an estimator for real outputs that parallels Bayes classifier for discrete outputs?

Optimal L2 Regressor
Best possible regression estimate at $x$: $f^*(x) = \mathbb{E}[Y \mid X = x]$.
Theorem: for any regression estimate $g(x)$,
$$\mathbb{E}_{X,Y}\big[(g(X) - Y)^2\big] \;\ge\; \mathbb{E}_{X,Y}\big[(f^*(X) - Y)^2\big].$$
Similar to the Bayes classifier, but for regression.
Proof is straightforward…

Proof
Consider the $L_2$ error of $g(x)$, conditioning on $X = x$:
$$\mathbb{E}\big[(g(x) - Y)^2 \mid X = x\big] = \mathbb{E}\big[(g(x) - f^*(x) + f^*(x) - Y)^2 \mid X = x\big]$$
$$= (g(x) - f^*(x))^2 + 2\,(g(x) - f^*(x))\,\mathbb{E}\big[f^*(x) - Y \mid X = x\big] + \mathbb{E}\big[(f^*(x) - Y)^2 \mid X = x\big]$$
Cross term: $\mathbb{E}\big[f^*(x) - Y \mid X = x\big] = f^*(x) - \mathbb{E}[Y \mid X = x] = 0$. Why?
Therefore
$$\mathbb{E}\big[(g(x) - Y)^2 \mid X = x\big] = (g(x) - f^*(x))^2 + \mathbb{E}\big[(f^*(x) - Y)^2 \mid X = x\big],$$
which is minimized when $g(x) = f^*(x)$!

Non-parametric Regression
Linear regression (and variants) is great, but what if we don't know the parametric form of the relationship between the independent and dependent variables?
How can we predict the value of a new test point $x$ without model assumptions?
Idea: $\hat{y} = f(x) =$ the average of the observed $Y$ values of the data in a local neighborhood of $x$!
(Figure: scattered $(X, Y)$ observations with a local neighborhood around the test point $x$.)

Kernel Regression
Want weights that emphasize local observations.
Consider example localization functions (kernels): Gaussian kernel, box kernel, triangle kernel.
(Figure: kernel-weighted estimate $\hat{y}$ through the scattered $(X, Y)$ observations.)
Then define the weighted average:
$$\hat{y} = f(x) = \frac{\sum_{i=1}^n K\!\left(\frac{x - x_i}{h}\right) y_i}{\sum_{i=1}^n K\!\left(\frac{x - x_i}{h}\right)},$$
where $K$ is the localization kernel and $h$ is the bandwidth.
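A minimal NumPy sketch of this weighted average with a Gaussian kernel; the bandwidth and toy data are illustrative:

```python
import numpy as np

def kernel_regress(x_query, x_train, y_train, h=0.5):
    """Kernel regression: kernel-weighted average of the training y's.

    Weights decay with the scaled distance |x_query - x_i| / h (Gaussian kernel).
    """
    weights = np.exp(-0.5 * ((x_query - x_train) / h) ** 2)
    return np.sum(weights * y_train) / np.sum(weights)

# Toy usage on noisy sine data.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, size=300)
y_train = np.sin(x_train) + rng.normal(0, 0.2, size=300)
print(kernel_regress(np.pi / 2, x_train, y_train))   # close to sin(pi/2) = 1
```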

Consistency Theorem
Recall: the best possible regression estimate at $x$ is $f^*(x) = \mathbb{E}[Y \mid X = x]$.
Theorem: As $n \to \infty$, $h \to 0$, $hn \to \infty$,
$$\mathbb{E}\big[(\hat{f}_n(X) - f^*(X))^2\big] \to 0,$$
where $\hat{f}_n$ is the kernel regressor, for most localization kernels.
Proof is a bit tedious…

Proof Sketch
Prove the result for a fixed $x$ and then integrate over $X$ (just like before).
Bias-variance decomposition:
$$\mathbb{E}\big[(\hat{f}_n(x) - f^*(x))^2\big] = \underbrace{\big(\mathbb{E}[\hat{f}_n(x)] - f^*(x)\big)^2}_{\text{squared bias of } \hat{f}_n(x)} + \underbrace{\mathbb{E}\Big[\big(\hat{f}_n(x) - \mathbb{E}[\hat{f}_n(x)]\big)^2\Big]}_{\text{variance of } \hat{f}_n(x)}$$
The squared bias term vanishes as $h \to 0$, and the variance term vanishes as $hn \to \infty$.
Pick $h$ as a function of $n$ so that both conditions hold simultaneously.

Kernel Regression
Advantages:
• Does not assume any parametric form of the regression function.
• Kernel regression is consistent
Disadvantages:
• Evaluation time complexity: O(dn)
• Need to keep all the data around!
How can we address the shortcomings of kernel regression?

k-d Trees: Speed Up Nonparametric Regression
k-d trees to the rescue!
Idea: partition the data into cells organized in a tree-based hierarchy (just like before).
To return an estimated value, return the average y value in a cell!
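A rough Python sketch of this idea: split recursively on the median coordinate, store the average $y$ in each leaf cell, and answer queries by walking down to the leaf; the leaf size and splitting rule are illustrative choices, not necessarily the construction from lecture.

```python
import numpy as np

class KDTreeRegressor:
    """Partition the data into cells with a k-d tree; predict the leaf's mean y."""

    def __init__(self, X, y, leaf_size=20, depth=0):
        n, d = X.shape
        if n <= leaf_size:
            self.leaf_mean = float(np.mean(y))   # average y value in this cell
            self.split_dim = None
            return
        self.split_dim = depth % d               # cycle through coordinates
        self.split_val = float(np.median(X[:, self.split_dim]))
        left = X[:, self.split_dim] <= self.split_val
        # Guard against degenerate splits where all points fall on one side.
        if left.all() or (~left).all():
            self.leaf_mean = float(np.mean(y))
            self.split_dim = None
            return
        self.left = KDTreeRegressor(X[left], y[left], leaf_size, depth + 1)
        self.right = KDTreeRegressor(X[~left], y[~left], leaf_size, depth + 1)

    def predict(self, x):
        if self.split_dim is None:
            return self.leaf_mean
        child = self.left if x[self.split_dim] <= self.split_val else self.right
        return child.predict(x)

# Toy usage on noisy sine data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)
tree = KDTreeRegressor(X, y)
print(tree.predict(np.array([np.pi / 2])))   # close to sin(pi/2) = 1
```

Prediction now costs a walk down the tree (roughly logarithmic in $n$ for balanced splits) instead of touching all $n$ points, at the price of a coarser, cell-wise constant estimate.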

What We Learned…
• Linear Regression
• Parametric vs Nonparametric regression
• Logistic Regression for classification
• Ridge and Lasso Regression
• Kernel Regression
• Consistency of Kernel Regression
• Speeding up non-parametric regression with trees

Questions?

Next time…
Statistical Theory of Learning!