程序代写代做代考 information theory algorithm AI Karush-Kuhn-Tucker conditions

Karush-Kuhn-Tucker conditions

Geoff Gordon & Ryan Tibshirani
Optimization 10-725 / 36-725

1

Remember duality

Given a minimization problem

min
x∈Rn

f(x)

subject to hi(x) ≤ 0, i = 1, . . .m
`j(x) = 0, j = 1, . . . r

we defined the Lagrangian:

L(x, u, v) = f(x) +

m∑
i=1

uihi(x) +
r∑
j=1

vj`j(x)

and Lagrange dual function:

g(u, v) = min
x∈Rn

L(x, u, v)

2

The subsequent dual problem is:

max
u∈Rm, v∈Rr

g(u, v)

subject to u ≥ 0

Important properties:

• Dual problem is always convex, i.e., g is always concave (even
if primal problem is not convex)

• The primal and dual optimal values, f? and g?, always satisfy
weak duality: f? ≥ g?

• Slater’s condition: for convex primal, if there is an x such that

h1(x) < 0, . . . hm(x) < 0 and `1(x) = 0, . . . `r(x) = 0 then strong duality holds: f? = g?. (Can be further refined to strict inequalities over nonaffine hi, i = 1, . . .m) 3 Duality gap Given primal feasible x and dual feasible u, v, the quantity f(x)− g(u, v) is called the duality gap between x and u, v. Note that f(x)− f? ≤ f(x)− g(u, v) so if the duality gap is zero, then x is primal optimal (and similarly, u, v are dual optimal) From an algorithmic viewpoint, provides a stopping criterion: if f(x)− g(u, v) ≤ �, then we are guaranteed that f(x)− f? ≤ � Very useful, especially in conjunction with iterative methods ... more dual uses in coming lectures 4 Dual norms Let ‖x‖ be a norm, e.g., • `p norm: ‖x‖p = ( ∑n i=1 |xi| p)1/p, for p ≥ 1 • Nuclear norm: ‖X‖nuc = ∑r i=1 σi(X) We define its dual norm ‖x‖∗ as ‖x‖∗ = max ‖z‖≤1 zTx Gives us the inequality |zTx| ≤ ‖z‖‖x‖∗, like Cauchy-Schwartz. Back to our examples, • `p norm dual: (‖x‖p)∗ = ‖x‖q, where 1/p+ 1/q = 1 • Nuclear norm dual: (‖X‖nuc)∗ = ‖X‖spec = σmax(X) Dual norm of dual norm: it turns out that ‖x‖∗∗ = ‖x‖ ... connections to duality (including this one) in coming lectures 5 Outline Today: • KKT conditions • Examples • Constrained and Lagrange forms • Uniqueness with 1-norm penalties 6 Karush-Kuhn-Tucker conditions Given general problem min x∈Rn f(x) subject to hi(x) ≤ 0, i = 1, . . .m `j(x) = 0, j = 1, . . . r The Karush-Kuhn-Tucker conditions or KKT conditions are: • 0 ∈ ∂f(x) + m∑ i=1 ui∂hi(x) + r∑ j=1 vj∂`j(x) (stationarity) • ui · hi(x) = 0 for all i (complementary slackness) • hi(x) ≤ 0, `j(x) = 0 for all i, j (primal feasibility) • ui ≥ 0 for all i (dual feasibility) 7 Necessity Let x? and u?, v? be primal and dual solutions with zero duality gap (strong duality holds, e.g., under Slater’s condition). Then f(x?) = g(u?, v?) = min x∈Rn f(x) + m∑ i=1 u?ihi(x) + r∑ j=1 v?j `j(x) ≤ f(x?) + m∑ i=1 u?ihi(x ?) + r∑ j=1 v?j `j(x ?) ≤ f(x?) In other words, all these inequalities are actually equalities 8 Two things to learn from this: • The point x? minimizes L(x, u?, v?) over x ∈ Rn. Hence the subdifferential of L(x, u?, v?) must contain 0 at x = x?—this is exactly the stationarity condition • We must have ∑m i=1 u ? ihi(x ?) = 0, and since each term here is ≤ 0, this implies u?ihi(x ?) = 0 for every i—this is exactly complementary slackness Primal and dual feasibility obviously hold. Hence, we’ve shown: If x? and u?, v? are primal and dual solutions, with zero duality gap, then x?, u?, v? satisfy the KKT conditions (Note that this statement assumes nothing a priori about convexity of our problem, i.e. of f, hi, `j) 9 Sufficiency If there exists x?, u?, v? that satisfy the KKT conditions, then g(u?, v?) = f(x?) + m∑ i=1 u?ihi(x ?) + r∑ j=1 v?j `j(x ?) = f(x?) where the first equality holds from stationarity, and the second holds from complementary slackness Therefore duality gap is zero (and x? and u?, v? are primal and dual feasible) so x? and u?, v? are primal and dual optimal. I.e., we’ve shown: If x? and u?, v? satisfy the KKT conditions, then x? and u?, v? are primal and dual solutions 10 Putting it together In summary, KKT conditions: • always sufficient • necessary under strong duality Putting it together: For a problem with strong duality (e.g., assume Slater’s condi- tion: convex problem and there exists x strictly satisfying non- affine inequality contraints), x? and u?, v? are primal and dual solutions ⇔ x? and u?, v? satisfy the KKT conditions (Warning, concerning the stationarity condition: for a differentiable function f , we cannot use ∂f(x) = {∇f(x)} unless f is convex) 11 What’s in a name? Older folks will know these as the KT (Kuhn-Tucker) conditions: • First appeared in publication by Kuhn and Tucker in 1951 • Later people found out that Karush had the conditions in his unpublished master’s thesis of 1939 Many people (including instructor!) use the term KKT conditions for unconstrained problems, i.e., to refer to stationarity condition Note that we could have alternatively derived the KKT conditions from studying optimality entirely via subgradients 0 ∈ ∂f(x?) + m∑ i=1 N{hi≤0}(x ?) + r∑ j=1 N{`j=0}(x ?) where recall NC(x) is the normal cone of C at x 12 Quadratic with equality constraints Consider for Q � 0, min x∈Rn 1 2 xTQx+ cTx subject to Ax = 0 E.g., as in Newton step for minx∈Rn f(x) subject to Ax = b Convex problem, no inequality constraints, so by KKT conditions: x is a solution if and only if[ Q AT A 0 ] [ x u ] = [ −c 0 ] for some u. Linear system combines stationarity, primal feasibility (complementary slackness and dual feasibility are vacuous) 13 Water-filling Example from B & V page 245: consider problem min x∈Rn − n∑ i=1 log(αi + xi) subject to x ≥ 0, 1Tx = 1 Information theory: think of log(αi + xi) as communication rate of ith channel. KKT conditions: −1/(αi + xi)− ui + v = 0, i = 1, . . . n ui · xi = 0, i = 1, . . . n, x ≥ 0, 1Tx = 1, u ≥ 0 Eliminate u: 1/(αi + xi) ≤ v, i = 1, . . . n xi(v − 1/(αi + xi)) = 0, i = 1, . . . n, x ≥ 0, 1Tx = 1 14 Can argue directly stationarity and complementary slackness imply xi = { 1/v − α if v ≤ 1/α 0 if v > 1/α

= max{0, 1/v − α}, i = 1, . . . n

Still need x to be feasible, i.e., 1Tx = 1, and this gives

n∑
i=1

max{0, 1/v − αi} = 1

Univariate equation, piecewise linear in 1/v and not hard to solve

This reduced problem is
called water-filling

(From B & V page 246)

15

Lasso

Let’s return the lasso problem: given response y ∈ Rn, predictors
A ∈ Rn×p (columns A1, . . . Ap), solve

min
x∈Rp

1

2
‖y −Ax‖2 + λ‖x‖1

KKT conditions:
AT (y −Ax) = λs

where s ∈ ∂‖x‖1, i.e.,

si ∈



{1} if xi > 0
{−1} if xi < 0 [−1, 1] if xi = 0 Now we read off important fact: if |ATi (y −Ax)| < λ, then xi = 0 ... we’ll return to this problem shortly 16 Group lasso Suppose predictors A = [A(1) A(2) . . . A(G)], split up into groups, with each A(i) ∈ Rn×p(i) . If we want to select entire groups rather than individual predictors, then we solve the group lasso problem: min x=(x(1),...x(G))∈Rp 1 2 ‖y −Ax‖2 + λ G∑ i=1 √ p(i)‖x(i)‖2 (From Yuan and Lin (2006), “Model selection and estimation in regression with grouped variables”) 17 KKT conditions: AT(i)(y −Ax) = λ √ p(i)s(i), i = 1, . . . G where each s(i) ∈ ∂‖x(i)‖2, i.e., s(i) ∈ { {x(i)/‖x(i)‖2} if x(i) 6= 0 {z ∈ Rp(i) : ‖z‖2 ≤ 1} if x(i) = 0 , i = 1, . . . G Hence if ∥∥AT (i) (y −Ax) ∥∥ 2 < λ √ p(i), then x(i) = 0. On the other hand, if x(i) 6= 0, then x(i) = ( AT(i)A(i) + λ √ p(i) ‖x(i)‖2 I )−1 AT(i)r−(i), where r−(i) = y − ∑ j 6=i A(j)x(j) 18 Constrained and Lagrange forms Often in statistics and machine learning we’ll switch back and forth between constrained form, where t ∈ R is a tuning parameter, min x∈Rn f(x) subject to h(x) ≤ t (C) and Lagrange form, where λ ≥ 0 is a tuning parameter, min x∈Rn f(x) + λ · h(x) (L) and claim these are equivalent. Is this true (assuming convex f, h)? (C) to (L): if problem (C) is strictly feasible, then strong duality holds, and there exists some λ ≥ 0 (dual solution) such that any solution x? in (C) minimizes f(x) + λ · (f(x)− t) so x? is also a solution in (L) 19 (L) to (C): if x? is a solution in (L), then the KKT conditions for (C) are satisfied by taking t = h(x?), so x? is a solution in (C) Conclusion:⋃ λ≥0 {solutions in (L)} ⊆ ⋃ t {solutions in (C)} ⋃ λ≥0 {solutions in (L)} ⊇ ⋃ t such that (C) is strictly feasible {solutions in (C)} Strictly speaking this is not a perfect equivalence (albeit minor nonequivalence). Note: when the only value of t that leads to a feasible but not strictly feasible constraint set is t = 0, i.e., {x : g(x) ≤ t} 6= ∅, {x : g(x) = t} = ∅ ⇒ t = 0 (e.g., this is true if g is a norm) then we do get perfect equivalence 20 Uniqueness in 1-norm penalized problems Using the KKT conditions and simple probability arguments, we can produce the following (perhaps surprising) result: Theorem: Let f be differentiable and strictly convex, A ∈ Rn×p, λ > 0. Consider

min
x∈Rp

f(Ax) + λ‖x‖1

If the entries of A are drawn from a continuous probability dis-
tribution (on Rnp), then with probability 1 the solution x? ∈ Rp
is unique and has at most min{n, p} nonzero components

Remark: here f must be strictly convex, but no restrictions on the
dimensions of A (we could have p� n)

Proof: the KKT conditions are

−AT∇f(Ax) = λs, si ∈

{
{sign(xi)} if xi 6= 0
[−1, 1] if xi = 0

, i = 1, . . . n

21

Note that Ax, s are unique. Define S = {j : |ATj ∇f(Ax)| = λ},
also unique, and note that any solution satisfies xi = 0 for all i /∈ S

First assume that rank(AS) < |S| (here A ∈ Rn×|S|, submatrix of A corresponding to columns in S). Then for some i ∈ S, Ai = ∑ j∈S\{i} cjAj for constants cj ∈ R, hence siAi = ∑ j∈S\{i} (sisjcj) · (sjAj) Taking an inner product with −∇f(Ax), λ = ∑ j∈S\{i} (sisjcj)λ, i.e., ∑ j∈S\{i} sisjcj = 1 22 In other words, we’ve proved that rank(AS) < |S| implies siAi is in the affine span of sjAj , j ∈ S \ {i} (subspace of dimension < n) We say that the matrix A has columns in general position if any affine subspace L of dimension k < n does not contain more than k + 1 elements; of {±A1, . . .±Ap} (excluding antipodal pairs) It is straightforward to show that, if the entries of A have a density over Rnp, then A is in general position with probability 1 ● ● ● ● ● ● ● A1 A2 A3 A4 23 Therefore, if entries of A are drawn from continuous probability distribution, any solution must satisfy rank(AS) = |S| Recalling the KKT conditions, this means the number of nonzero components in any solution is ≤ |S| ≤ min{n, p} Furthermore, we can reduce our optimization problem (by partially solving) to min xS∈R|S| f(ASxS) + λ‖xS‖1 Finally, strict convexity implies uniqueness of the solution in this problem, and hence in our original problem 24 Back to duality One of the most important uses of duality is that, under strong duality, we can characterize primal solutions from dual solutions Recall that under strong duality, the KKT conditions are necessary for optimality. Given dual solutions u?, v?, any primal solution x? satisfies the stationarity condition 0 ∈ ∂f(x?) + m∑ i=1 u?i ∂hi(x ?) + r∑ j=1 v?i ∂`j(x ?) In other words, x? achieves the minimum in minx∈Rn L(x, u ?, v?) • Generally, this reveals a characterization of primal solutions • In particular, if this is satisfied uniquely (i.e., above problem has a unique minimizer), then the corresponding point must be the primal solution 25 References • S. Boyd and L. Vandenberghe (2004), Convex Optimization, Cambridge University Press, Chapter 5 • R. T. Rockafellar (1970), Convex Analysis, Princeton University Press, Chapters 28–30 26