Homework 4: Kernel Methods
Instructions: Your answers to the questions below, including plots and mathematical work, should be submitted as a single PDF file. It’s preferred that you write your answers using software that typesets mathematics (e.g. LaTeX, LyX, or MathJax via IPython), though if you need to you may scan handwritten work. You may find the minted package convenient for including source code in your LaTeX document. If you are using LyX, then the listings package tends to work better.
1 Introduction
The problem set begins with a couple of problems on kernel methods: the first explores what geometric information about the data is stored in the kernel matrix, and the second revisits kernel ridge regression with a direct approach, rather than using the Representer Theorem. At the end of the assignment you will find an Appendix that reviews some relevant definitions from linear algebra and gives some review exercises (not for credit). Next we have a problem that explores an interesting way to re-express Pegasos-style SSGD on any l2-regularized empirical risk objective function (i.e. not just the SVM). The new expression also happens to allow efficient updates in the sparse feature setting. In the next problem, we take a direct approach to kernelizing Pegasos. Finally we get to our coding problem, in which you’ll have the opportunity to see how kernel ridge regression works with different kernels on a one-dimensional, highly non-linear regression problem. There is also an optional coding problem, in which you can code a kernelized SVM and see how it works on a classification problem with a two-dimensional input space. The problem set ends with two theoretical problems. The first of these reviews the proof of the Representer Theorem. The second applies Lagrangian duality to show the equivalence of Tikhonov and Ivanov regularization (this material is optional).
2 [Optional] Kernel Matrices
The following problem will give us some additional insight into what information is encoded in the kernel matrix.
1. [Optional] Consider a set of vectors S = {x1 , . . . , xm }. Let X denote the matrix whose rows
are these vectors. Form the Gram matrix K = XXT . Show that knowing K is equivalent to
knowing the set of pairwise distances among the vectors in S as well as the vector lengths.
[Hint: The distance between x and y is given by d(x, y) = ∥x − y∥, and the norm of a vector x is defined as ∥x∥ = √⟨x, x⟩ = √(xᵀx).]
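As a quick sanity check of the relationship this problem is getting at, here is a minimal numpy sketch (not part of the required solution) showing that squared pairwise distances and vector lengths can be read off K = XXᵀ, since ∥xi − xj∥² = ⟨xi, xi⟩ − 2⟨xi, xj⟩ + ⟨xj, xj⟩:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))        # 5 vectors in R^3, stored as rows
K = X @ X.T                        # Gram matrix

# Pairwise distances computed directly from the vectors
D_direct = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

# The same distances recovered from K alone: d(x_i, x_j)^2 = K_ii - 2 K_ij + K_jj
diag = np.diag(K)
D_from_K = np.sqrt(np.maximum(diag[:, None] - 2 * K + diag[None, :], 0.0))

print(np.allclose(D_direct, D_from_K))                         # True
print(np.allclose(np.linalg.norm(X, axis=1), np.sqrt(diag)))   # vector lengths, too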
3 Kernel Ridge Regression
In lecture, we discussed how to kernelize ridge regression using the representer theorem. Here we pursue a bare-hands approach.
Suppose our input space is X = Rd and our output space is Y = R. Let D = {(x1, y1), . . . , (xn, yn)} be a training set from X × Y. We’ll use the “design matrix” X ∈ Rn×d, which has the input vectors as rows:

X = ⎡ — x1 — ⎤
    ⎢    ⋮    ⎥
    ⎣ — xn — ⎦
Recall the ridge regression objective function:
J(w) = ∥Xw − y∥² + λ∥w∥²,
for λ > 0.
1. Show that for w to be a minimizer of J(w), we must have XᵀXw + λIw = Xᵀy. Show that the minimizer of J(w) is w = (XᵀX + λI)⁻¹Xᵀy. Justify that the matrix XᵀX + λI is invertible for λ > 0. (The last part should follow easily from the exercises on psd and spd matrices in the Appendix.)
2. Rewrite XᵀXw + λIw = Xᵀy as w = (1/λ)(Xᵀy − XᵀXw). Based on this, show that we can write w = Xᵀα for some α, and give an expression for α.
3. Based on the fact that w = Xᵀα, explain why we say w is “in the span of the data.”
4. Show that α = (λI + XXᵀ)⁻¹y. Note that XXᵀ is the kernel matrix for the standard vector dot product. (Hint: Replace w by Xᵀα in the expression for α, and then solve for α.)
5. Give a kernelized expression for Xw, the predicted values on the training points. (Hint: Replace w by Xᵀα and α by its expression in terms of the kernel matrix XXᵀ.)
6. Give an expression for the prediction f(x) = xᵀw∗ for a new point x, not in the training set. The expression should only involve x via inner products with other x’s. [Hint: It is often convenient to define the column vector

kx = ( xᵀx1, . . . , xᵀxn )ᵀ

to simplify the expression.]
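To make the formulas above concrete, here is a small numpy sketch (not required for the assignment) that checks numerically that the primal solution and the “in the span of the data” solution coincide and give the same predictions:

import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 20, 5, 0.1
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Primal solution: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Dual solution: alpha = (lam I + X X^T)^{-1} y, with w = X^T alpha
K = X @ X.T
alpha = np.linalg.solve(lam * np.eye(n) + K, y)

print(np.allclose(w, X.T @ alpha))        # True: w lies in the span of the data
print(np.allclose(X @ w, K @ alpha))      # True: same predictions on training points

# Prediction on a new point x uses only inner products with the training points
x_new = rng.normal(size=d)
k_x = X @ x_new                            # the column vector k_x from the hint
print(np.allclose(x_new @ w, k_x @ alpha)) # True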
4 [Optional] Pegasos and SSGD for l2-regularized ERM¹
Consider the objective function

J(w) = (λ/2)∥w∥² + (1/n) Σ_{i=1}^n li(w),

where li(w) represents the loss on the ith training point (xi, yi). Suppose li(w) : Rd → R is a convex function. Let’s write

Ji(w) = (λ/2)∥w∥² + li(w),
for the one-point approximation to J(w) using the ith training point. Ji(w) is probably a very poor approximation of J(w). However, if we choose i uniformly at random from 1,…,n, then we do have EJi(w) = J(w). We’ll now show that subgradients of Ji(w) are unbiased estimators of some subgradient of J(w), which is our justification for using SSGD methods.
In the problems below, you may use the following facts about subdifferentials without proof (as in Homework #3): 1) If f1,…,fm : Rd → R are convex functions and f = f1 + ··· + fm, then ∂f(x) = ∂f1(x)+···+∂fm(x) [additivity]. 2) For α ≥ 0, ∂(αf)(x) = α∂f(x) [positive homogeneity].
1. [Optional] For each i = 1,…,n, let gi(w) be a subgradient of Ji(w) at w ∈ Rd. Let vi(w) be a subgradient of li(w) at w. Give an expression for gi(w) in terms of w and vi(w).
2. [Optional] Show that Egi(w) ∈ ∂J(w), where the expectation is over the randomly selected i ∈ 1, . . . , n. (In words, the expectation of our subgradient of a randomly chosen Ji(w) is in the subdifferential of J.)
3. [Optional] Now suppose we are carrying out SSGD with the Pegasos step-size η(t) = 1/(λt), t = 1, 2, . . ., starting from w(1) = 0. In the t’th step, suppose we select the ith point and thus take the step w(t+1) = w(t) − η(t)gi(w(t)). Let’s write v(t) = vi(w(t)), which is the subgradient of the loss part of Ji(w(t)) that is used in step t. Show that

w(t+1) = −(1/(λt)) Σ_{τ=1}^t v(τ).

[Hint: One approach is proof by induction. First show it’s true for w(2). Then assume it’s true for w(t) and prove it’s true for w(t+1). This will prove that it’s true for all t = 2, 3, . . . by induction.]

¹This problem is based on Shalev-Shwartz and Ben-David’s book Understanding Machine Learning: From Theory to Algorithms, Sections 14.5.3, 15.5, and 16.3.
(a) [Optional] We can use the previous result to get a nice equivalent formulation of Pegasos. Let θ(t) = Σ_{τ=1}^{t−1} v(τ). Then w(t+1) = −(1/(λt)) θ(t+1), and Pegasos from the previous homework is equivalent to Algorithm 1.
Algorithm 1: Pegasos Algorithm Reformulation
input: Training set (x1, y1), . . . , (xn, yn) ∈ Rd × {−1, 1} and λ > 0.
θ(1) = (0, . . . , 0) ∈ Rd
w(1) = (0, . . . , 0) ∈ Rd
t = 1  # step number
repeat
    randomly choose j in 1, . . . , n
    if yj⟨w(t), xj⟩ < 1
        θ(t+1) = θ(t) + yj xj
    else
        θ(t+1) = θ(t)
    endif
    w(t+1) = −(1/(λt)) θ(t+1)  # need not be explicitly computed
    t = t + 1
until bored
return w(t) = −(1/(λ(t−1))) θ(t)
Similar to the w = sW decomposition from homework #3, this decomposition gives the opportunity for significant speedup. Explain how Algorithm 1 can be implemented so that, if xj has s nonzero entries, then we only need to do O(s) memory accesses in every pass through the loop.
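For reference, here is a direct Python transcription of Algorithm 1 (a dense sketch, without the O(s) sparse optimization the problem asks you to work out; the function name and the stopping rule are placeholders):

import numpy as np

def pegasos_reformulated(X, y, lam, num_steps, seed=0):
    """Pegasos via the theta-parameterization of Algorithm 1.

    X is an (n, d) array of inputs, y an array of +/-1 labels.
    theta accumulates y_j * x_j over margin-violating steps; w is only
    ever formed as w = -theta / (lam * t).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)            # theta^(1)
    w = np.zeros(d)                # w^(1)
    for t in range(1, num_steps + 1):
        j = rng.integers(n)
        if y[j] * (w @ X[j]) < 1:  # margin check: y_j <w^(t), x_j> < 1
            theta = theta + y[j] * X[j]
        w = -theta / (lam * t)     # w^(t+1); need not be stored explicitly
    return w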
5 Kernelized Pegasos
Recall the SVM objective function
min_{w∈Rd} (λ/2)∥w∥² + (1/m) Σ_{i=1}^m max(0, 1 − yi wᵀxi)
and the Pegasos algorithm on the training set (x1, y1), . . . , (xn, yn) ∈ Rd × {−1, 1} (Algorithm 2). Note that in every step of Pegasos, we rescale w(t) by 1 − η(t)λ = 1 − 1/t ∈ (0, 1). This “shrinks” the entries of w(t) towards 0, and it’s due to the regularization term (λ/2)∥w∥² in the SVM objective function. Also note that if the example in a particular step, say (xj, yj), is not classified with the required margin (i.e. if we don’t have the margin yj⟨w(t), xj⟩ ≥ 1), then we also add a multiple of xj to w(t) to end up with w(t+1). This part of the adjustment comes from the empirical risk.
Algorithm 2: Pegasos Algorithm
input: Training set (x1, y1), . . . , (xn, yn) ∈ Rd × {−1, 1} and λ > 0.
w(1) = (0, . . . , 0) ∈ Rd
t = 0  # step number
repeat
    t = t + 1
    η(t) = 1/(tλ)  # step multiplier
    randomly choose j in 1, . . . , n
    if yj⟨w(t), xj⟩ < 1
        w(t+1) = (1 − η(t)λ)w(t) + η(t)yj xj
    else
        w(t+1) = (1 − η(t)λ)w(t)
until bored
return w(t)
Since we initialize with w(1) = 0, we are guaranteed that we can always write²

w(t) = Σ_{i=1}^n αi(t) xi

after any number of steps t. When we kernelize Pegasos, we’ll be tracking α directly, rather than w.
1. Kernelize the expression for the margin. That is, show that yj⟨w(t), xj⟩ = yj Kj· α(t), where α(t) = (α1(t), . . . , αn(t))ᵀ, k(xi, xj) = ⟨xi, xj⟩, and Kj· denotes the jth row of the kernel matrix K corresponding to kernel k. (A small numerical check of this identity appears after this problem list.)
2. Suppose that w(t) = Σ_{i=1}^n αi(t) xi and for the next step we have selected a point (xj, yj) that does not have a margin violation. Give an update expression for α(t+1) so that w(t+1) = Σ_{i=1}^n αi(t+1) xi.
3. Repeat the previous problem, but for the case that (xj, yj) has a margin violation. Then give the full pseudocode for kernelized Pegasos. You may assume that you receive the kernel matrix K as input, along with the labels y1, . . . , yn ∈ {−1, 1}.
2Note: This resembles the conclusion of the representer theorem, but it’s saying something different. Here, we are saying that the w(t) after every step of the Pegasos algorithm lives in the span of the data. The representer theorem says that a mathematical minimizer of the SVM objective function (i.e. what the Pegasos algorithm would converge to after infinitely many steps) lies in the span of the data. If, for example, we had chosen an initial w(1) that is NOT in the span of the data, then none of the w(t)’s from Pegasos would be in the span of the data. However, we know Pegasos converges to a minimum of the SVM objective. Thus after a very large number of steps, w(t) would be very close to being in the span of the data. It’s the gradient of the regularization term that pulls us back towards the span of the data. This is basically because the regularization is driving all components towards 0, while the empirical risk updates are only pushing things away from 0 in directions in the span of the data.
4. [Optional] While the direct implementation of the original Pegasos required updating all entries of w in every step, a direct kernelization of Algorithm 2, as we have done above, leads to updating all entries of α in every step. Give a version of the kernelized Pegasos algorithm that does not suffer from this inefficiency. You may try splitting the scale and direction similar to the approach of the previous problem set, or you may use a decomposition based on Algorithm 1 from the optional problem 4 above.
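As referenced in problem 1, here is a quick numerical check (a sketch only, using the linear kernel) that yj⟨w, xj⟩ = yj Kj·α whenever w = Σi αi xi:

import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
alpha = rng.normal(size=n)

w = X.T @ alpha      # w = sum_i alpha_i x_i
K = X @ X.T          # kernel matrix for the linear kernel k(x_i, x_j) = <x_i, x_j>

j = 2
print(np.isclose(y[j] * (w @ X[j]), y[j] * (K[j] @ alpha)))   # True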
6 Kernel Methods: Let’s Implement
In this section you will get the opportunity to code kernel ridge regression and, optionally, kernelized SVM. To speed things along, we’ve written a great deal of support code for you, which you can find in the Jupyter notebooks in the homework zip file.
6.1 One more review of kernelization can’t hurt (no problems)
Consider the following optimization problem on a data set (x1, y1), . . . , (xn, yn) ∈ Rd × Y:

min_{w∈Rd} R(√⟨w, w⟩) + L(⟨w, x1⟩, . . . , ⟨w, xn⟩),

where w, x1, . . . , xn ∈ Rd, and ⟨·,·⟩ is the standard inner product on Rd. The function R : [0,∞) → R is nondecreasing and gives us our regularization term, while L : Rn → R is arbitrary³ and gives us our loss term. We noted in lecture that this general form includes soft-margin SVM and ridge regression, though not lasso regression. Using the representer theorem, we showed that if the optimization problem has a solution, there is always a solution of the form w = Σ_{i=1}^n αi xi, for some α ∈ Rn. Plugging this into our original problem, we get the following “kernelized” optimization problem:

min_{α∈Rn} R(√(αᵀKα)) + L(Kα),

where K ∈ Rn×n is the Gram matrix (or “kernel matrix”) defined by Kij = k(xi, xj) = ⟨xi, xj⟩. Predictions are given by

f(x) = Σ_{i=1}^n αi k(xi, x),

and we can recover the original w ∈ Rd by w = Σ_{i=1}^n αi xi.
The “kernel trick” is to swap out occurrences of the kernel k (and the corresponding Gram
matrix K) with another kernel. For example, we could replace k(xi, xj) = ⟨xi, xj⟩ by k′(xi, xj) = ⟨ψ(xi), ψ(xj)⟩ for an arbitrary feature mapping ψ : Rd → RD. In this case, the recovered w ∈ RD would be w = Σ_{i=1}^n αi ψ(xi) and predictions would be given by ⟨w, ψ(x)⟩.
³You may be wondering “Where are the yi’s?” They’re built into the function L. For example, a square loss on a training set of size 3 could be represented as L(s1, s2, s3) = (1/3)[(s1 − y1)² + (s2 − y2)² + (s3 − y3)²], where each si stands for the ith prediction ⟨w, xi⟩.
More interestingly, we can replace k by another kernel k′′(xi,xj) for which we do not even know or cannot explicitly write down a corresponding feature map ψ. Our main example of this is the
RBF kernel
k(x, x′) = exp(−∥x − x′∥² / (2σ²)),

for which the corresponding feature map ψ is infinite dimensional. In this case, we cannot recover w since it would be infinite dimensional. Predictions must be done using α ∈ Rn, with f(x) = Σ_{i=1}^n αi k(xi, x).
Your implementation of kernelized methods below should not make any reference to w or to a feature map ψ. Your “learning” routine should return α, rather than w, and your prediction function should also use α rather than w. This will allow us to work with kernels that correspond to infinite-dimensional feature vectors.
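To illustrate the point, here is a tiny sketch (with a made-up quadratic feature map, purely for illustration) showing that predictions computed from α and the kernel agree with predictions computed from the recovered w, so an implementation never needs w or ψ explicitly:

import numpy as np

def psi(x):
    """A toy explicit feature map from R to R^3 (illustration only)."""
    return np.array([1.0, x, x ** 2])

def k(a, b):
    """The kernel corresponding to psi: k(a, b) = <psi(a), psi(b)>."""
    return psi(a) @ psi(b)

x_train = np.array([-1.0, 0.5, 2.0])
alpha = np.array([0.3, -1.2, 0.7])       # some coefficients alpha in R^n

x_new = 1.7
f_via_kernel = sum(a_i * k(x_i, x_new) for a_i, x_i in zip(alpha, x_train))
w = sum(a_i * psi(x_i) for a_i, x_i in zip(alpha, x_train))   # recovered w
f_via_w = w @ psi(x_new)

print(np.isclose(f_via_kernel, f_via_w))  # True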
6.2 Kernels and Kernel Machines
There are many different families of kernels. So far we’ve spoken about linear kernels, RBF/Gaussian kernels, and polynomial kernels. The last two kernel types have parameters. In this section, we’ll implement these kernels in a way that will be convenient for implementing our kernelized ML methods later on. For simplicity, and because it is by far the most common situation⁴, we will assume that our input space is X = Rd. This allows us to represent a collection of n inputs in a matrix X ∈ Rn×d, as usual.
1. Write functions that compute the RBF kernel kRBF(σ)(x, x′) = exp(−∥x − x′∥²/(2σ²)) and the polynomial kernel kpoly(a,d)(x, x′) = (a + ⟨x, x′⟩)^d. The linear kernel klinear(x, x′) = ⟨x, x′⟩ has been done for you in the support code. Your functions should take as input two matrices W ∈ Rn1×d and X ∈ Rn2×d and should return a matrix M ∈ Rn1×n2 where Mij = k(Wi·, Xj·). In words, the (i,j)’th entry of M should be the kernel evaluation between wi (the ith row of W) and xj (the jth row of X). The matrix M could be called the “cross-kernel” matrix, by analogy to the cross-covariance matrix. For the RBF kernel, you may use the scipy function cdist(X1,X2,’sqeuclidean’) in the package scipy.spatial.distance or (with some more work) write it in terms of the linear kernel (Bauckhage’s article on calculating Euclidean distance matrices may be helpful). A short sketch along these lines appears after this problem list.
2. Use the linear kernel function defined in the code to compute the kernel matrix on the set of points x0 ∈ DX = {−4, −1, 0, 2}. Include both the code and the output.
3. Suppose we have the data set D = {(−4,2),(−1,0),(0,3),(2,5)}. Then by the representer theorem, the final prediction function will be in the span of the functions x → k(x0,x) for
x0 ∈ DX = {−4, −1, 0, 2}. This set of functions will look quite different depending on the kernel function we use.
(a) Plot the set of functions x → klinear(x0, x) for x0 ∈ DX and for x ∈ [−6, 6].
(b) Plot the set of functions x → kpoly(1,3)(x0, x) for x0 ∈ DX and for x ∈ [−6, 6].
(c) Plot the set of functions x → kRBF(1)(x0, x) for x0 ∈ DX and for x ∈ [−6, 6].
4We are noting this because one interesting aspect of kernel methods is that they can act directly on an arbitrary input space X (e.g. text files, music files, etc.), so long as you can define a kernel function k : X × X → R. But we’ll not consider that case here.
(d) By the representer theorem, the final prediction function will be of the form f(x) = Σ_{i=1}^n αi k(xi, x), where x1, . . . , xn ∈ X are the inputs in the training set. This is a special case of what is sometimes called a kernel machine, which is a function of the form f(x) = Σ_{i=1}^r αi k(μi, x), where μ1, . . . , μr ∈ X are called prototypes or centroids (Murphy’s book, Section 14.3.1). In the special case that the kernel is an RBF kernel, we get what’s called an RBF Network (proposed by Broomhead and Lowe in 1988). We can see that the prediction functions we get from our kernel methods will be kernel machines in which each input in the training set x1, . . . , xn serves as a prototype point. Complete the predict function of the class Kernel_Machine in the skeleton code. Construct a Kernel_Machine object with the RBF kernel (sigma=1), with prototype points at −1, 0, 1 and corresponding weights 1, −1, 1. Plot the resulting function.
Note: For this problem, and for other problems below, it may be helpful to use partial application on your kernel functions. For example, if your polynomial kernel function has signature polynomial_kernel(W, X, offset, degree), you can write k = functools.partial(polynomial_kernel, offset=2, degree=2), and then a call to k(W,X) is equivalent to polynomial_kernel(W, X, offset=2, degree=2), the advantage being that the extra parameter settings are built into k(W,X). This can be convenient so that you can have a function that just takes a kernel function k(W,X) and doesn’t have to worry about the parameter settings for the kernel.
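As referenced in problem 1, here is one possible sketch of an RBF cross-kernel function built on cdist, combined with the functools.partial pattern from the note above (the function name and signature are placeholders, not the skeleton code’s):

import functools
import numpy as np
from scipy.spatial.distance import cdist

def rbf_kernel(W, X, sigma):
    """Cross-kernel matrix M with M[i, j] = exp(-||W_i - X_j||^2 / (2 sigma^2))."""
    sq_dists = cdist(W, X, 'sqeuclidean')        # (n1, n2) squared distances
    return np.exp(-sq_dists / (2 * sigma ** 2))

# Bind the kernel parameter so downstream code only sees k(W, X)
k = functools.partial(rbf_kernel, sigma=1.0)

W = np.array([[-4.0], [-1.0], [0.0], [2.0]])     # prototype points as rows
X = np.linspace(-6, 6, 5).reshape(-1, 1)
print(k(W, X).shape)                              # (4, 5)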
6.3 Kernel Ridge Regression
In the zip file for this assignment, you’ll find a training and test set, along with some skeleton code. We’re considering a one-dimensional regression problem, in which X = Y = A = R. We’ll fit this data using kernelized ridge regression, and we’ll compare the results using several different kernel functions. Because the input space is one-dimensional, we can easily visualize the results.
1. Plot the training data. You should note that while there is a clear relationship between x and y, the relationship is not linear.
2. In a previous problem, we showed that in kernelized ridge regression, the final prediction function is f(x) = Σ_{i=1}^n αi k(xi, x), where α = (λI + K)⁻¹y and K ∈ Rn×n is the kernel matrix of the training data: Kij = k(xi, xj) for the training inputs x1, . . . , xn. In terms of kernel machines, αi is the weight on the kernel function evaluated at the prototype point xi. Complete the function train_kernel_ridge_regression so that it performs kernel ridge regression and returns a Kernel_Machine object that can be used for predicting on new points. (A minimal standalone sketch of this closed-form fit appears after this problem list.)
3. Use the code provided to plot your fits to the training data for the RBF kernel with a fixed regularization parameter of 0.0001 for 3 different values of sigma: 0.01, 0.1, and 1.0. Which values of sigma do you think would be more likely to overfit, and which less?
4. Use the code provided to plot your fits to the training data for the RBF kernel with a fixed sigma of 0.02 and 4 different values of the regularization parameter λ: 0.0001, 0.01, 0.1, and 2.0. What happens to the prediction function as λ → ∞?
5. Find the best hyperparameter settings (including kernel parameters and the regularization parameter) for each of the kernel types. Summarize your results in a table, which gives
training error and test error for each setting. Include in your table the best settings for each kernel type, as well as nearby settings that show that making a small change in any one of the hyperparameters in either direction will cause the performance to get worse. You should use average square loss on the test set to rank the parameter settings. To make things easier for you, we have provided an sklearn wrapper for the kernel ridge regression function we have created so that you can use sklearn’s GridSearchCV. Note: Because of the small dataset size, these models can be fit extremely fast, so there is no excuse for not doing extensive hyperparameter tuning.
6. Plot your best fitting prediction functions using the polynomial kernel and the RBF kernel. Use the domain x ∈ (−0.5, 1.5). Comment on the results.
7. The data for this problem was generated as follows: A function f : R → R was chosen. Then to generate a point (x, y), we sampled x uniformly from (0, 1) and we sampled ε ∼ N(0, 0.1²) (so Var(ε) = 0.1²). The final point is (x, f(x) + ε). What are the Bayes decision function and the Bayes risk for the loss function ℓ(ŷ, y) = (ŷ − y)²?
8. [Optional] Attempt to improve performance by using different kernel functions. Chapter 4 from Rasmussen and Williams’ book Gaussian Processes for Machine Learning describes many kernel functions, though they are called covariance functions in that book (but they have exactly the same definition). Note that you may also create a kernel function by first explicitly creating feature vectors, if you are so inspired.
9. [Optional] Use any machine learning model you like to get the best performance you can.
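As referenced in problem 2, here is a minimal standalone sketch of the closed-form fit (using placeholder names rather than the skeleton’s train_kernel_ridge_regression / Kernel_Machine API):

import numpy as np

def fit_kernel_ridge(K_train, y, lam):
    """Solve (lam I + K) alpha = y for the training kernel matrix K_train."""
    n = K_train.shape[0]
    return np.linalg.solve(lam * np.eye(n) + K_train, y)

def predict(K_test_train, alpha):
    """Predictions f(x) = sum_i alpha_i k(x_i, x), with one test point per row of K_test_train."""
    return K_test_train @ alpha

# Example with the linear kernel on made-up data
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(10, 1)), rng.normal(size=10)
X_test = rng.normal(size=(4, 1))
alpha = fit_kernel_ridge(X_train @ X_train.T, y_train, lam=0.1)
y_hat = predict(X_test @ X_train.T, alpha)
print(y_hat.shape)    # (4,)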
6.4 [Optional] Kernelized Support Vector Machines with Kernelized Pegasos
1. [Optional] Load the SVM training and test data from the zip file. Plot the training data using the code supplied. Are the data linearly separable? Quadratically separable? What if we used some RBF kernel?
2. [Optional] Unlike for kernel ridge regression, there is no closed-form solution for SVM classification (kernelized or not). Implement kernelized Pegasos. Because we are not using a sparse representation for this data, you will probably not see much gain by implementing the “optimized” versions described in the problems above.
3. [Optional] Find the best hyperparameter settings (including kernel parameters and the regularization parameter) for each of the kernel types. Summarize your results in a table, which gives training error and test error (i.e. average 0/1 loss) for each setting. Include in your table the best settings for each kernel type, as well as nearby settings that show that making a small change in any one of the hyperparameters in either direction will cause the performance to get worse. You should use the 0/1 loss on the test set to rank the parameter settings.
4. [Optional] Plot your best fitting prediction functions using the linear, polynomial, and the RBF kernel. The code provided may help.
7 Representer Theorem
Recall the following theorem from lecture:
Theorem (Representer Theorem). Let
J(w) = R (∥w∥) + L (⟨w, ψ(x1)⟩ , . . . , ⟨w, ψ(xn)⟩) ,
where R : R≥0 → R is nondecreasing (the regularization term) and L : Rn → R is arbitrary (the
loss term). If J(w) has a minimizer, then it has a minimizer of the form
w∗ = Σ_{i=1}^n αi ψ(xi).
Furthermore, if R is strictly increasing, then all minimizers have this form.
Note: There is nothing in this theorem that guarantees J(w) has a minimizer at all. If there is no minimizer, then this theorem does not tell us anything.
In this problem, we will prove the part of the Representer theorem for the case that R is strictly increasing.
1. Let M be a closed subspace of a Hilbert space H. For any x ∈ H, let m0 = ProjM x be the projection of x onto M. By the Projection Theorem, we know that (x − m0) ⊥ M. Then by the Pythagorean Theorem, we know ∥x∥² = ∥m0∥² + ∥x − m0∥². From this we concluded in lecture that ∥m0∥ ≤ ∥x∥. Show that we have ∥m0∥ = ∥x∥ only when m0 = x. (Hint: Use the positive-definiteness of the inner product: ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 ⟺ x = 0, and the fact that we’re using the norm derived from such an inner product.)
2. Give the proof of the Representer Theorem in the case that R is strictly increasing. That is, show that if R is strictly increasing, then all minimizers have the claimed form. (Hint: Consider separately the case that ∥w∥ < ∥w∗∥ and the case that ∥w∥ = ∥w∗∥.)
3. [Optional] Suppose that R : R≥0 → R and L : Rn → R are both convex functions. Use properties of convex functions to show that w → L(⟨w, ψ(x1)⟩, . . . , ⟨w, ψ(xn)⟩) is a convex function of w, and then that J(w) is also a convex function of w. For simplicity, you may assume that our feature space is Rd, rather than a generic Hilbert space. You may also use the fact that the composition of a convex function and an affine function is convex. That is, suppose f : Rn → R, A ∈ Rn×m and b ∈ Rn, and define g : Rm → R by g(x) = f(Ax + b); then if f is convex, so is g. From this exercise, we can conclude that if L and R are convex, then J does have a minimizer of the form w∗ = Σ_{i=1}^n αi ψ(xi), and if R is also strictly increasing, then all minimizers of J have this form.
8 Ivanov and Tikhonov Regularization
In lecture there was a claim that the Ivanov and Tikhonov forms of ridge and lasso regression are equivalent. We will now prove a more general result.
8.1 Tikhonov optimal implies Ivanov optimal
Let φ : F → R be any performance measure of f ∈ F, and let Ω : F → [0,∞) be any complexity measure. For example, for ridge regression over the linear hypothesis space F = {fw(x) = wᵀx | w ∈ Rd}, we would have φ(fw) = (1/n) Σ_{i=1}^n (wᵀxi − yi)² and Ω(fw) = wᵀw.
1. Suppose that for some λ ≥ 0 we have the Tikhonov regularization solution

f∗ ∈ argmin_{f∈F} [φ(f) + λΩ(f)].   (1)

Show that f∗ is also an Ivanov solution. That is, ∃r ≥ 0 such that

f∗ ∈ argmin_{f∈F} φ(f) subject to Ω(f) ≤ r.   (2)

(Hint: Start by figuring out what r should be. Then one approach is proof by contradiction: suppose f∗ is not the optimum in (2), and show that this contradicts the fact that f∗ solves (1).)

8.2 [Optional] Ivanov optimal implies Tikhonov optimal (when we have Strong Duality)
For the converse, we will restrict our hypothesis space to a parametric set. That is,
F = {fw(x) : X → R | w ∈ Rd}. So we will now write φ and Ω as functions of w ∈ Rd.
Let w∗ be a solution to the following Ivanov optimization problem: minimize φ(w)
subject to Ω(w) ≤ r,
for any r ≥ 0. Assume that strong duality holds for this optimization problem and that the dual solution is attained (e.g. Slater’s condition would suffice). Then we will show that there exists a λ ≥ 0 such that w∗ ∈ arg minw∈Rd [φ(w) + λΩ(w)] .
1. [Optional] Write the Lagrangian L(w, λ) for the Ivanov optimization problem.
2. [Optional] Write the dual optimization problem in terms of the dual objective function g(λ), and give an expression for g(λ). [Writing g(λ) as an optimization problem is expected - don’t try to solve it.]
3. [Optional] We assumed that the dual solution is attained, so let λ∗ ∈ arg maxλ≥0 g(λ). We also assumed strong duality, which implies φ(w∗) = g(λ∗). Show that the minimum in the expression for g(λ∗) is attained at w∗. [Hint: You can use the same approach we used when we derived that strong duality implies complementary slackness.] Conclude the proof by showing that for the choice of λ = λ∗, we have w∗ ∈ arg minw∈Rd [φ(w) + λ∗Ω(w)] .
4. [Optional] The conclusion of the previous problem allows λ = 0, which means we’re not actually regularizing at all. This will happen when the constraint in the Ivanov optimization problem is not active. That is, we’ll need to take λ = 0 whenever the solution w∗ to the Ivanov optimization problem has Ω(w∗) < r. Show this. However, consider the following condition (suggested in [?]):
inf_{w∈Rd} φ(w) < inf_{w : Ω(w)≤r} φ(w).
This condition simply says that we can get a strictly smaller performance measure (e.g. we can fit the training data strictly better) if we remove the Ivanov regularization. With this additional condition, show that if λ∗ ∈ arg maxλ≥0 g(λ) then λ∗ > 0. Moreover, show that the solution w∗ satisfies Ω(w∗) = r – that is, the Ivanov constraint is active.
8.3 [Optional] Ivanov implies Tikhonov for Ridge Regression
To show that Ivanov implies Tikhonov for the ridge regression problem (square loss with l2 regu- larization), we need to demonstrate strong duality and that the dual optimum is attained. Both of these things are implied by Slater’s constraint qualifications.
1. [Optional] Show that the Ivanov form of ridge regression

minimize Σ_{i=1}^n (yi − wᵀxi)²
subject to wᵀw ≤ r

is a convex optimization problem with a strictly feasible point, so long as r > 0. (Thus implying the Ivanov and Tikhonov forms of ridge regression are equivalent when r > 0.)
A Positive Semidefinite Matrices
In statistics and machine learning, we use positive semidefinite matrices a lot. Let’s recall some definitions from linear algebra that will be useful here:
Definition. A set of vectors {x1,…,xn} is orthonormal if ⟨xi,xi⟩ = 1 for any i ∈ {1,…,n} (i.e. xi has unit norm), and for any i,j ∈ {1,…,n} with i ̸= j we have ⟨xi,xj⟩ = 0 (i.e. xi and xj are orthogonal).
Note that if the vectors are column vectors in a Euclidean space, we can write this as xiᵀxj = 1(i = j) for all i, j ∈ {1,…,n}.
Definition. A matrix is orthogonal if it is a square matrix with orthonormal columns.
It follows from the definition that if a matrix M ∈ Rn×n is orthogonal, then MT M = I, where
I is the n × n identity matrix. Thus Mᵀ = M⁻¹, and so MMᵀ = I as well.
Definition. A matrix M is symmetric if M = Mᵀ.
Definition. For a square matrix M, if Mv = λv for some column vector v and scalar λ, then v is called an eigenvector of M and λ is the corresponding eigenvalue.
Theorem (Spectral Theorem). A real, symmetric matrix M ∈ Rn×n can be diagonalized as M = QΣQT , where Q ∈ Rn×n is an orthogonal matrix whose columns are a set of orthonormal eigenvectors of M, and Σ is a diagonal matrix of the corresponding eigenvalues.
Definition. A real, symmetric matrix M ∈ Rn×n is positive semidefinite (psd) if for any x ∈ Rn,
xT Mx ≥ 0.
Note that unless otherwise specified, when a matrix is described as positive semidefinite, we are implicitly assuming it is real and symmetric (or complex and Hermitian in certain contexts, though not here).
As an exercise in matrix multiplication, note that for any matrix A with columns a1, . . . , ad, that is

A = ( a1 ··· ad ) ∈ Rn×d,

we have

AᵀMA = ⎡ a1ᵀMa1  a1ᵀMa2  ···  a1ᵀMad ⎤
       ⎢ a2ᵀMa1  a2ᵀMa2  ···  a2ᵀMad ⎥
       ⎢    ⋮       ⋮      ⋱      ⋮   ⎥
       ⎣ adᵀMa1  adᵀMa2  ···  adᵀMad ⎦

So M is psd if and only if for any A ∈ Rn×d, we have diag(AᵀMA) = (a1ᵀMa1, . . . , adᵀMad)ᵀ ≽ 0, where ≽ is elementwise inequality, and 0 is a d × 1 column vector of 0’s.
1. Use the definition of a psd matrix and the spectral theorem to show that all eigenvalues of a positive semidefinite matrix M are non-negative. [Hint: By the Spectral Theorem, Σ = QᵀMQ for some Q. What if you take A = Q in the “exercise in matrix multiplication” described above?]
2. In this problem, we show that a psd matrix is a matrix version of a non-negative scalar, in that they both have a “square root”. Show that a symmetric matrix M can be expressed as M = BBᵀ for some matrix B, if and only if M is psd. [Hint: To show M = BBᵀ implies M is psd, use the fact that for any vector v, vᵀv ≥ 0. To show that M psd implies M = BBᵀ for some B, use the Spectral Theorem.]
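A quick numerical illustration of these facts (a sketch only, not one of the exercises): a matrix of the form M = BBᵀ has non-negative eigenvalues, and the diagonal of AᵀMA is elementwise non-negative.

import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 3))
M = B @ B.T                      # psd by construction (exercise 2)

eigvals = np.linalg.eigvalsh(M)  # eigenvalues of the symmetric matrix M
print(np.all(eigvals >= -1e-10))                  # True: all eigenvalues non-negative

A = rng.normal(size=(4, 2))
print(np.all(np.diag(A.T @ M @ A) >= -1e-10))     # True: each a_i^T M a_i >= 0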
B Positive Definite Matrices

Definition. A real, symmetric matrix M ∈ Rn×n is positive definite (spd) if for any x ∈ Rn with x ≠ 0,

xᵀMx > 0.
1. Show that all eigenvalues of a symmetric positive definite matrix are positive. [Hint: You can use the same method as you used for psd matrices above.]
2. Let M be a symmetric positive definite matrix. By the spectral theorem, M = QΣQᵀ, where Σ is a diagonal matrix of the eigenvalues of M. By the previous problem, all diagonal entries of Σ are positive. If Σ = diag(σ1, . . . , σn), then Σ⁻¹ = diag(σ1⁻¹, . . . , σn⁻¹). Show that the matrix QΣ⁻¹Qᵀ is the inverse of M. (A small numerical check of this appears after this problem list.)
3. Since positive semidefinite matrices may have eigenvalues that are zero, we see by the previous problem that not all psd matrices are invertible. Show that if M is a psd matrix and I is the identity matrix, then M + λI is symmetric positive definite for any λ > 0, and give an expression for the inverse of M + λI.
4. Let M and N be symmetric matrices, with M positive semidefinite and N positive definite. Use the definitions of psd and spd to show that M + N is symmetric positive definite. Thus M +N is invertible. (Hint: For any x ̸= 0, show that xT(M +N)x > 0. Also note that xT (M + N)x = xT Mx + xT Nx.)
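As referenced in problem 2, here is a small numpy check (a sketch only) of that claim, together with the fact from problem 3 that adding λI with λ > 0 makes a singular psd matrix symmetric positive definite:

import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
M = B @ B.T + np.eye(5)                    # symmetric positive definite

# Spectral decomposition M = Q Sigma Q^T
sigma, Q = np.linalg.eigh(M)
M_inv = Q @ np.diag(1.0 / sigma) @ Q.T     # the claimed inverse Q Sigma^{-1} Q^T
print(np.allclose(M_inv @ M, np.eye(5)))   # True

# A rank-deficient psd matrix plus lam * I is invertible for lam > 0
M_psd = B[:, :2] @ B[:, :2].T
lam = 0.5
print(np.linalg.matrix_rank(M_psd) < 5)                          # True: M_psd is singular
print(np.all(np.linalg.eigvalsh(M_psd + lam * np.eye(5)) > 0))   # True: M_psd + lam*I is spd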