EE 227A: Convex Optimization and Applications February 7, 2012 Lecture 7: Weak Duality
Lecturer: Laurent El Ghaoui
7.1 Lagrange dual problem

7.1.1 Primal problem
In this section, we consider a possibly non-convex optimization problem
p∗ := min_x f0(x) : fi(x) ≤ 0, i = 1, . . . , m,  (7.1)
where the functions f0, f1, . . . , fm are given real-valued functions. We denote by D the domain of the problem (which is the intersection of the domains of all the functions involved), and by X ⊆ D its feasible set.
We will refer to the above as the primal problem, and to the decision variable x in that problem as the primal variable. One purpose of Lagrange duality is to find a lower bound on a minimization problem (or an upper bound for a maximization problem). Later, we will use duality tools to derive optimality conditions for convex problems.
7.1.2 Dual problem
Lagrangian. To the problem we associate the Lagrangian L : R^n × R^m → R, with values

L(x, λ) := f0(x) + Σ_{i=1}^m λi fi(x).

The variables λ ∈ R^m are called Lagrange multipliers.
We observe that, for every feasible x ∈ X and every λ ≥ 0, f0(x) is bounded below by L(x, λ):

∀ x ∈ X, ∀ λ ∈ R^m_+ : f0(x) ≥ L(x, λ).  (7.2)

The Lagrangian can be used to express the primal problem (7.1) as an unconstrained one. Precisely:

p∗ = min_x max_{λ≥0} L(x, λ),  (7.3)

where we have used the fact that, for any vector f ∈ R^m, we have

max_{λ≥0} λ^T f = { 0 if f ≤ 0; +∞ otherwise }.
Lagrange dual function. We then define the Lagrange dual function (dual function for short) as

g(λ) := min_x L(x, λ).
Note that, since g is the pointwise minimum of affine functions (L(x, ·) is affine for every x), it is concave. Note also that it may take the value −∞.
From the bound (7.2), by minimizing over x in the right-hand side, we obtain

∀ x ∈ X, ∀ λ ≥ 0 : f0(x) ≥ min_{x′} L(x′, λ) = g(λ),

which, after minimizing over x the left-hand side, leads to the lower bound

∀ λ ∈ R^m_+ : p∗ ≥ g(λ).
Lagrange dual problem. The best lower bound that we can obtain using the above bound is p∗ ≥ d∗, where

d∗ = max_{λ≥0} g(λ).
We refer to the above problem as the dual problem, and to the vector λ ∈ Rm as the dual variable. The dual problem involves the maximization of a concave function under convex (sign) constraints, so it is a convex problem. The dual problem always contains the implicit constraint λ ∈ dom g.
We have obtained:
Theorem 1 (Weak duality). For the general (possibly non-convex) problem (7.1), weak duality holds: p∗ ≥ d∗.
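As a quick illustration, consider the one-dimensional problem p∗ = min_x x² : 1 − x ≤ 0, for which p∗ = 1. The Lagrangian is L(x, λ) = x² + λ(1 − x), so that g(λ) = min_x L(x, λ) = λ − λ²/4, with the minimum attained at x = λ/2. Maximizing over λ ≥ 0 yields d∗ = 1, attained at λ = 2: the bound p∗ ≥ d∗ holds, and is in fact tight here (the problem is convex).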
Case with equality constraints. If equality constraints are present in the problem, we can represent them as two inequalities. It turns out that this leads to the same dual as if we had directly used a single dual variable for each equality constraint, not restricted in sign. To see this, consider the problem
p∗ := min_x f0(x) : fi(x) ≤ 0, i = 1, . . . , m,
             hi(x) = 0, i = 1, . . . , p.

We write the problem as

p∗ := min_x f0(x) : fi(x) ≤ 0, i = 1, . . . , m,
             hi(x) ≤ 0, −hi(x) ≤ 0, i = 1, . . . , p.
Using multipliers νi± for the constraints ±hi(x) ≤ 0, we write the associated Lagrangian as

L(x, λ, ν+, ν−) = f0(x) + Σ_{i=1}^m λi fi(x) + Σ_{i=1}^p νi+ hi(x) + Σ_{i=1}^p νi− (−hi(x))
              = f0(x) + Σ_{i=1}^m λi fi(x) + Σ_{i=1}^p νi hi(x),
where ν := ν+ − ν− does not have any sign constraints.
Thus, inequality constraints in the original problem are associated with sign constraints
on the corresponding multipliers, while the multipliers for the equality constraints are not explicitly constrained.
7.1.3 Minimax inequality
Weak duality can also be obtained as a consequence of the following minimax inequality, which is valid for any function φ of two vector variables x, y, and any subsets X, Y:

max_{y∈Y} min_{x∈X} φ(x, y) ≤ min_{x∈X} max_{y∈Y} φ(x, y).  (7.4)

To prove this, start from

∀ x, y : min_{x′∈X} φ(x′, y) ≤ max_{y′∈Y} φ(x, y′),

and take the minimum over x ∈ X on the right-hand side, then the maximum over y ∈ Y on the left-hand side.
Weak duality is indeed a direct consequence of the above. To see this, start from the unconstrained formulation (7.3), and apply the above inequality with φ = L, the Lagrangian of the original problem, and y = λ, the vector of Lagrange multipliers.
Interpretation as a game. We can interpret the minimax inequality result in the context of a one-shot, zero-sum game. Assume that we have two players A and B, where A controls the decision variable x, while B controls y. We assume that both players have full knowledge of the other player’s decision, once it is made. Player A seeks to minimize a payoff (to player B) L(x, y), while B seeks to maximize that payoff. The right-hand side in (7.4) is the optimal payoff if the first player is required to play first. Obviously, the first player can do better by playing second, since then he or she knows the opponent’s choice and can adapt to it.
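For finite sets X and Y, the inequality (7.4) is easy to check numerically. Below is a minimal sketch in Python (NumPy); the random payoff matrix Phi, with rows indexed by x and columns by y, is illustrative data, not from the notes.

    import numpy as np

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((5, 4))   # Phi[i, j] = phi(x_i, y_j)

    # B plays first: for each y, A best-responds by minimizing over x.
    max_min = Phi.min(axis=0).max()
    # A plays first: for each x, B best-responds by maximizing over y.
    min_max = Phi.max(axis=1).min()

    assert max_min <= min_max           # the minimax inequality (7.4)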
7.2 Examples
7.2.1 Linear optimization problem
Inequality form. Consider the LP in standard inequality form

p∗ = min_x c^T x : Ax ≤ b,

where A ∈ R^{m×n}, b ∈ R^m, and the inequality in the constraint Ax ≤ b is interpreted component-wise.
The Lagrange function is

L(x, λ) = c^T x + λ^T (Ax − b),

and the corresponding dual function is

g(λ) = min_x L(x, λ) = { −b^T λ if A^T λ + c = 0; −∞ otherwise }.

The dual problem reads

d∗ = max_{λ≥0} g(λ) = max_λ { −b^T λ : λ ≥ 0, A^T λ + c = 0 }.
The dual problem is an LP in standard (sign-constrained) form, just as the primal problem was an LP in standard (inequality) form.
Weak duality implies that

c^T x + b^T λ ≥ 0

for every x, λ such that Ax ≤ b, λ ≥ 0, A^T λ = −c. This property can be proven directly, by replacing c by −A^T λ in the left-hand side of the above inequality, and exploiting Ax ≤ b and λ ≥ 0.
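This direct argument is easy to check numerically. Here is a minimal NumPy sketch; the data generation is illustrative, chosen so that both feasibility conditions hold by construction (λ ≥ 0 is drawn first and c is set to −A^T λ).

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 6, 3
    A = rng.standard_normal((m, n))
    lam = rng.random(m)              # lambda >= 0
    c = -A.T @ lam                   # dual feasibility: A^T lambda = -c
    x = rng.standard_normal(n)
    b = A @ x + rng.random(m)        # primal feasibility: Ax <= b

    # c^T x + b^T lambda = lambda^T (b - Ax) >= 0
    assert c @ x + b @ lam >= 0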
Standard form. We can also consider an LP in standard form:

p∗ = min_x c^T x : Ax = b, x ≥ 0.
The equality constraints are associated with a dual variable ν that is not constrained in the dual problem.
The Lagrange function is

L(x, λ, ν) = c^T x − λ^T x + ν^T (b − Ax),

and the corresponding dual function is

g(λ, ν) = min_x L(x, λ, ν) = { b^T ν if c = A^T ν + λ; −∞ otherwise }.

The dual problem reads

d∗ = max_{λ≥0, ν} g(λ, ν) = max_ν { b^T ν : c ≥ A^T ν }.
This is an LP in inequality form.
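As a sanity check, one can solve a small instance of this primal-dual pair with a modeling tool and compare optimal values. The sketch below uses CVXPY (an assumption: any LP solver would do), with random data chosen so the primal is feasible; by LP strong duality the two values coincide, consistent with the bound p∗ ≥ d∗.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 3, 5
    A = rng.standard_normal((m, n))
    b = A @ rng.random(n)            # feasible by construction
    c = rng.random(n)

    x = cp.Variable(n, nonneg=True)
    primal = cp.Problem(cp.Minimize(c @ x), [A @ x == b])

    nu = cp.Variable(m)
    dual = cp.Problem(cp.Maximize(b @ nu), [A.T @ nu <= c])

    p_star, d_star = primal.solve(), dual.solve()
    print(p_star, d_star)            # p* >= d*, with equality here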
7.2.2 Minimum Euclidean distance problem
Consider the problem of minimizing the Euclidean distance to a given affine space:
min_x (1/2)∥x∥² : Ax = b,  (7.5)

where A ∈ R^{p×n}, b ∈ R^p. We assume that A has full row rank, or equivalently, AA^T ≻ 0. The Lagrangian is

L(x, ν) = (1/2)∥x∥² + ν^T (Ax − b),

and the Lagrange dual function is

g(ν) = min_x L(x, ν) = min_x (1/2)∥x∥² + ν^T (Ax − b).
In this example, the dual function can be computed analytically, using the optimality condition ∇x L(x, ν) = x + A^T ν = 0. We obtain x = −A^T ν, and

g(ν) = −(1/2) ν^T AA^T ν − b^T ν.

The dual problem reads

d∗ = max_ν g(ν) = max_ν −(1/2) ν^T AA^T ν − b^T ν.

The dual problem can also be solved analytically, since it is unconstrained (the domain of g is the entire space R^p). We obtain ν∗ = −(AA^T)^{−1} b, and

d∗ = (1/2) b^T (AA^T)^{−1} b.

We have thus obtained the bound p∗ ≥ d∗.
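Since both sides have closed forms, the bound is easy to verify numerically. A minimal NumPy sketch, with random full-row-rank data (illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    p, n = 3, 6
    A = rng.standard_normal((p, n))   # full row rank with probability 1
    b = rng.standard_normal(p)

    nu = -np.linalg.solve(A @ A.T, b) # dual optimum: nu* = -(AA^T)^{-1} b
    x = -A.T @ nu                     # primal point from grad_x L = 0
    p_star = 0.5 * x @ x              # feasible, since Ax = b
    d_star = 0.5 * b @ np.linalg.solve(A @ A.T, b)

    assert np.allclose(A @ x, b)
    assert p_star >= d_star - 1e-9    # here the bound is tight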
7.2.3 A non-convex boolean problem
For a given matrix W = W^T ≻ 0, we consider the problem

p∗ = max_x x^T W x : xi² ≤ 1, i = 1, . . . , n.
Lagrange relaxation. In this maximization problem, Lagrange duality will provide an upper bound on the problem. This is called a “relaxation”, as we go above the true maximum, as if we had relaxed (ignored) some constraints.
Lagrange dual. The Lagrangian writes

L(x, λ) = x^T W x + Σ_{i=1}^n λi (1 − xi²) = Tr Dλ + x^T (W − Dλ) x,
where Dλ := diag(λ).
To find the dual function, we need to maximize the Lagrangian with respect to the primal variable x. We express this problem as

g(λ) = max_x L(x, λ) = min_t { t : ∀ x, t ≥ Tr Dλ + x^T (W − Dλ) x }.
The last inequality holds if and only if

[Dλ − W, 0; 0, t − Tr Dλ] ≽ 0.

Hence the dual function is the optimal value of an SDP in one variable:

g(λ) = min_t { t : [Dλ − W, 0; 0, t − Tr Dλ] ≽ 0 }.

We can solve this problem explicitly:

g(λ) = { Tr Dλ if Dλ ≽ W; +∞ otherwise }.
The dual problem involves minimizing the dual function (that is, finding the best upper bound) over the variable λ ≥ 0:

d∗ = min_λ { λ^T 1 : diag(λ) ≽ W }.

The above is an SDP in the variable λ. Note that λ > 0 is automatically enforced by the PSD constraint, since W ≻ 0.
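A minimal CVXPY sketch of this SDP (illustrative: the matrix W below is random, and an SDP-capable solver such as the default SCS is assumed):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(4)
    n = 4
    M = rng.standard_normal((n, n))
    W = M @ M.T + n * np.eye(n)      # W = W^T > 0

    lam = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.sum(lam)), [cp.diag(lam) >> W])
    d_star = prob.solve()
    print(d_star)                    # upper bound on max x^T W x s.t. x_i^2 <= 1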
Geometric interpretation. The Lagrange relaxation of the primal problem can be interpreted geometrically, as follows. For t > 0, λ > 0, consider the ellipsoids

Et = { x : x^T W x ≤ t },   Eλ = { x : x^T Dλ x ≤ Tr Dλ }.

The primal problem amounts to finding the smallest t ≥ 0 for which the ellipsoid Et contains the ball B∞ := { x : ∥x∥∞ ≤ 1 }. Note that for every λ > 0, Eλ contains the ball B∞. To find an upper bound on the problem, we can find the smallest t for which there exists λ > 0 such that Et ⊇ Eλ. The latter condition is precisely diag(λ) ≽ W, t ≥ Tr Dλ.
[Figure 7.1: Geometric interpretation of the dual problem in the Boolean quadratic problem. In 2D the relaxation turns out to be exact.]
7.3 More on non-convex quadratic optimization
The Boolean problem examined previously is part of a general class of non-convex quadratic problems of the form
p∗ := max_x q0(x) : qi(x) ≤ 0, i = 1, . . . , m,  (7.6)

where x ∈ R^n is the decision variable, and the qi’s are quadratic functions of the form

qi(x) := x^T Qi x + 2 qi^T x + ri, i = 0, 1, . . . , m.
Lagrange relaxation. The idea is that if, for a given m-vector λ ≥ 0 and scalar t, we have

∀ x : q0(x) ≤ Σ_{i=1}^m λi qi(x) + t,

then for every x that is feasible for (7.6), the sum in the above is non-positive. Hence, q0(x) ≤ t, so that t is an upper bound on our problem. The condition above is easy to check, as it involves a single quadratic function: indeed, it is equivalent to the LMI in (t, λ):

[Q0, q0; q0^T, r0] ≼ Σ_{i=1}^m λi [Qi, qi; qi^T, ri] + [0, 0; 0, t].  (7.7)
Hence, the best upper bound that we can achieve using this approach is the SDP
min_{t,λ} t : (7.7), λ ≥ 0.
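A CVXPY sketch of this SDP, for illustration only: the data below (a random quadratic objective and a single unit-ball constraint q1(x) = x^T x − 1 ≤ 0, which guarantees the relaxation is feasible and bounded) is an assumption, not from the notes.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(5)
    n = 3
    A0 = rng.standard_normal((n, n))
    Q0, q0, r0 = (A0 + A0.T) / 2, rng.standard_normal(n), 0.0

    def blk(Qi, qi, ri):   # the symmetric block matrix [Qi, qi; qi^T, ri]
        return np.block([[Qi, qi[:, None]], [qi[None, :], np.array([[ri]])]])

    M0 = blk(Q0, q0, r0)
    M1 = blk(np.eye(n), np.zeros(n), -1.0)   # q1(x) = x^T x - 1
    E = np.zeros((n + 1, n + 1)); E[n, n] = 1.0

    t, lam = cp.Variable(), cp.Variable(nonneg=True)
    prob = cp.Problem(cp.Minimize(t), [lam * M1 + t * E >> M0])
    print(prob.solve())    # upper bound on max q0(x) s.t. q1(x) <= 0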
The S-lemma. This mysterious name corresponds to a special case of non-convex quadratic optimization, where there is only a single constraint. (Refer to appendix B of [BV] for more details.) The problem bears the form
max_x q0(x) : q1(x) ≤ 0,
where both q0, q1 are arbitrary quadratic functions. The S-lemma states that if there exists a point x ∈ R^n such that q1(x) < 0, then the Lagrange relaxation is exact. The latter has the form of an SDP:

min_{t,λ} t : [Q0, q0; q0^T, r0] ≼ λ [Q1, q1; q1^T, r1] + [0, 0; 0, t], λ ≥ 0.  (7.8)

This shows in particular that the apparently non-convex problem of finding the direction of maximal variance for a given covariance matrix Σ,

max_{x : ∥x∥2=1} x^T Σ x,

is actually convex. Lagrange relaxation for the problem yields the dual problem (check this!)

min_t t : tI ≽ Σ.

From the S-lemma, the bound is exact. The S-lemma has many other applications.

Exercises

[Figure 7.2: Localization problem with three range measurements in two dimensions.]

1. Anchor localization. We are given anchor positions xi ∈ R^3, and associated distances from these anchor points to an unknown object, Ri, i = 1, . . . , m. The problem is to estimate the position of the object, and an associated measure of uncertainty around the estimated point. Geometrically, the measurements imply that the object is located at the intersection of the m spheres of centers xi and radii Ri, i = 1, . . . , m. The main problem is to provide one point in the intersection located at some kind of “center”, and also a measure of the size of the intersection.

In this problem, we seek to compute an outer spherical approximation to the intersection, that is, a sphere of center x0 and radius R0, of minimal volume, such that it contains the intersection.

(a) First show how to find a point in the intersection, or determine it is empty, via SOCP. To simplify, and without loss of generality, we assume from now on that 0 is inside the intersection. This means that the vector z with components zi := Ri² − xi^T xi, i = 1, . . . , m, is non-negative componentwise.

(b) A first approach, which works well only in moderate dimensions (2D or 3D), simply entails gridding the boundary of the intersection. In 2D, we can parametrize the boundary explicitly, as a curve, using an angular parameter. For each angular direction θ ∈ [0, 2π], we can easily find the point that is located on the boundary of the intersection, in the direction given by θ: we simply maximize t such that the point (t cos θ, t sin θ) is inside every one of the spheres. (There is an explicit formula for the maximal value.) Once we have computed N points on the boundary, x(k), k = 1, . . . , N, we simply solve the SOCP

min_{x0, R0} R0 : R0 ≥ ∥x0 − x(k)∥2, k = 1, . . . , N.

Compare the results with a uniform gridding of 13 and 63 points. Use the data

X = (x1, x2, x3) = [−0.46, 0.19, 0.48; 0.03, 0.54, −0.49],   R^T = (R1, R2, R3) = (0.5, 0.6, 0.85).

(c) Show that the optimal (minimum-volume) spherical approximation can be obtained by solving

R0² = min_{x0} max_x { ∥x − x0∥² : ∥x − xi∥² ≤ Ri², i = 1, . . . , m }.

(d) Using Lagrange relaxation, show that an upper bound on the optimal radius can be obtained as

min_{x0,y} F(x0, y), with F(x0, y) := Σ_{i=1}^m yi Ri² + max_x ( ∥x − x0∥² − Σ_{i=1}^m yi ∥x − xi∥² ).

(e) Show that

F(x0, y) = { x0^T x0 + y^T z + ∥Xy − x0∥² / (Σ_{i=1}^m yi − 1)   if Σ_{i=1}^m yi > 1;
             x0^T x0 + y^T z   if Σ_{i=1}^m yi = 1 and x0 = Xy;
             +∞   otherwise }.
(f) Solve the problem via CVX, and compare your approximation with the gridding approach.
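A possible CVXPY rendering of the bound in part (d), restricted to the case Σ yi > 1 of the formula in part (e): this is a sketch, not the official solution (the exercise asks for CVX, which is MATLAB-based), and quad_over_lin expresses the jointly convex term ∥Xy − x0∥²/(Σ yi − 1).

    import cvxpy as cp
    import numpy as np

    X = np.array([[-0.46, 0.19, 0.48],
                  [0.03, 0.54, -0.49]])   # anchors as columns (2D data)
    R = np.array([0.5, 0.6, 0.85])
    z = R**2 - np.sum(X**2, axis=0)       # z_i = R_i^2 - x_i^T x_i
    m = R.size

    x0 = cp.Variable(2)
    y = cp.Variable(m, nonneg=True)
    F = cp.sum_squares(x0) + y @ z + cp.quad_over_lin(X @ y - x0, cp.sum(y) - 1)
    prob = cp.Problem(cp.Minimize(F))
    R0 = np.sqrt(prob.solve())
    print(R0, x0.value)                   # radius and center of the bounding sphere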
2. Reachable sets for discrete-time dynamical systems. Consider the discrete-time linear system
x(t + 1) = A x(t) + B p(t), t = 0, 1, 2, . . .

where A ∈ R^{n×n} and B ∈ R^{n×np}. We assume that the initial condition x(0) is zero, while the signal p is considered to be noise, and is only known to be norm-bounded, precisely ∥p(t)∥2 ≤ 1 for every t ≥ 0. The goal of reachability analysis is to come up with bounds on the state at a certain time T.
We seek a minimum-volume sphere S that is guaranteed to contain x(T ), irrespective of the values of the perturbation signal p(t) within its bounds. By applying the recursion, we can express x(T ) as a linear combination of p(0), . . . , p(T − 1):
x(T) = Σ_{t=0}^{T−1} A^t B p(t) = L p,

where p = (p(0), . . . , p(T − 1)), and L := [L(0), . . . , L(T − 1)], with L(t) := A^t B.
(a) Show that a sufficient condition for S to contain the state vector at time T is

∃ λ ≥ 0 : ∀ p = (p(0), . . . , p(T − 1)), p^T L^T L p ≤ R0² + Σ_{t=0}^{T−1} λ(t) (p(t)^T p(t) − 1).
(b) Show how to compute the best approximation based on the condition above, via SDP.
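For part (b), a hedged CVXPY sketch follows. It assumes the reduction that part (b) asks you to derive, namely that the condition in (a) is equivalent to the LMI L^T L ≼ blkdiag(λ(0) I, . . . , λ(T−1) I) together with R0² ≥ Σ_t λ(t); the system matrices below are illustrative.

    import cvxpy as cp
    import numpy as np

    A = np.array([[0.8, 0.5], [-0.4, 0.9]])   # illustrative data
    B = np.array([[1.0], [0.5]])
    T, n_p = 6, B.shape[1]

    L = np.hstack([np.linalg.matrix_power(A, t) @ B for t in range(T)])

    lam = cp.Variable(T, nonneg=True)
    # blkdiag(lam(0) I, ..., lam(T-1) I), with I of size n_p
    Lam = cp.diag(cp.hstack([lam[t] * np.ones(n_p) for t in range(T)]))
    prob = cp.Problem(cp.Minimize(cp.sum(lam)), [Lam >> L.T @ L])
    R0 = np.sqrt(prob.solve())
    print(R0)    # radius of a sphere guaranteed to contain x(T)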