
Stewart School of Industrial and Systems Engineering Georgia Institute of Technology
October 26, 2022
Lecture 16: Nonlinear Optimization


ISyE 6673: Financial Optimization

Nonlinear Optimization
• Farkas’ lemma underpins fundamental theorems in asset pricing
• Markets that obey the law of one price have linear pricing rules
• Complete markets have a unique pricing rule
• Arbitrage-free markets have positive pricing rules
Beyond linear optimization, towards optimization under uncertainty.
Dr. ISyE 6673: Financial Optimization October 26, 2022 2 49

Nonlinear Optimization: From linear optimization to nonlinear optimization

[Sequence of figure-only slides illustrating the transition from linear to nonlinear optimization.]

A two-dimensional LP

(P)  max  c1x1 + c2x2
     s.t. (ai)⊺x ≤ bi,  1 ≤ i ≤ m

(D)  min  ∑i bipi
     s.t. ∑i piai = c,  p ≥ 0

• x* primal optimal and nondegenerate with active constraints a1, a2
• Equivalent to c ∈ cone(a1, a2)

[Figure: feasible region with active constraints (a1)⊺x ≤ b1 and (a2)⊺x ≤ b2; the objective c lies in the cone spanned by a1 and a2.]

From LP to NLP

• Now consider a two-dimensional nonlinear optimization problem:
      min  c(x)
      s.t. fi(x) ≤ 0,  1 ≤ i ≤ m
• Assume c and the fi are all differentiable convex functions
• Consider an optimal solution x* and take the Taylor expansion of all nonlinear functions around x*:
      min  c(x*) + ∇c(x*)⊺(x − x*)
      s.t. fi(x*) + ∇fi(x*)⊺(x − x*) ≤ 0
• Equivalently:
      max  −∇c(x*)⊺x
      s.t. ∇fi(x*)⊺x ≤ ∇fi(x*)⊺x* − fi(x*)

From LP to NLP

      max  −∇c(x*)⊺x
      s.t. ∇fi(x*)⊺x ≤ ∇fi(x*)⊺x* − fi(x*) =: bi,  1 ≤ i ≤ m

x* optimal with f1, f2 active  ⇔  −∇c(x*) ∈ cone(∇f1(x*), ∇f2(x*))

[Figure: at x*, the regions f1(x) ≤ 0 and f2(x) ≤ 0 are linearized into the halfspaces ∇f1(x*)⊺x ≤ b1 and ∇f2(x*)⊺x ≤ b2; −∇c(x*) lies in the cone of the active gradients.]
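The cone-membership condition above is easy to check numerically. A minimal sketch in NumPy, using a hypothetical problem not taken from the slides — min (x1−2)² + (x2−2)² subject to x1 + x2 ≤ 2, whose optimum is x* = (1, 1) with a single active constraint:

```python
import numpy as np

# Hypothetical problem (illustration only): min (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2
# The optimum is x* = (1, 1), where the constraint is active.
x_star = np.array([1.0, 1.0])
grad_c = 2 * (x_star - np.array([2.0, 2.0]))     # objective gradient = (-2, -2)
grad_f1 = np.array([1.0, 1.0])                   # gradient of x1 + x2 - 2

# -grad_c must be a nonnegative multiple of the active gradient (cone membership)
lam = (-grad_c) @ grad_f1 / (grad_f1 @ grad_f1)  # least-squares multiplier
residual = -grad_c - lam * grad_f1

print(lam, np.allclose(residual, 0))             # multiplier 2.0, exact membership
```

With a nonnegative multiplier and zero residual, −∇c(x*) lies in cone(∇f1(x*)), certifying optimality geometrically.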

Normal cone

• We can define cone(∇f1(x*), ∇f2(x*)) a bit more generally

Definition
Given a convex set X and a point x* on its boundary, the normal cone of X at x* is the set
      NX(x*) = { y ∈ Rn : y⊺(x − x*) ≤ 0  ∀x ∈ X }

• Equivalently, y⊺x ≤ y⊺x*  ∀x ∈ X

[Figure: vectors y1, y2, y3 lie in NX(x*); y4 does not.]
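For a polytope given by its vertices, the normal-cone definition reduces to finitely many inequalities, since y⊺(x − x*) ≤ 0 holds for all x ∈ X exactly when it holds at every vertex. A small sketch with a hypothetical set (the unit square, not from the slides):

```python
import numpy as np

# Hypothetical set: the unit square, with x* = (1, 1) a corner point.
vertices = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
x_star = np.array([1.0, 1.0])

def in_normal_cone(y, eps=1e-9):
    """y is in NX(x*) iff y.(v - x*) <= 0 for every vertex v of the polytope."""
    return bool(np.all((vertices - x_star) @ y <= eps))

print(in_normal_cone(np.array([1.0, 1.0])))    # True: points away from the square
print(in_normal_cone(np.array([1.0, 0.0])))    # True: normal to the face x1 = 1
print(in_normal_cone(np.array([-1.0, 0.0])))   # False: points into the square
```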

Supporting hyperplane

• Each vector y in the normal cone defines a supporting hyperplane of the set X (a generalization of a “tangent line”)

Optimality conditions

Geometric optimality conditions

• Convex optimization problem over set X:
      min  c(x)
      s.t. fi(x) ≤ 0,  1 ≤ i ≤ m
• A point x* is an optimal solution if and only if x* is feasible and
      −∇c(x*) ∈ NX(x*)
• Equivalently:
      ∇c(x*)⊺(x − x*) ≥ 0,  ∀x ∈ X
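The variational inequality ∇c(x*)⊺(x − x*) ≥ 0 can be spot-checked by sampling feasible points. A sketch with a hypothetical projection problem (not from the slides): minimizing ∥x − p∥² over the box [0, 1]², where the minimizer is the projection of p onto the box:

```python
import numpy as np

# Hypothetical problem: min ||x - p||^2 over the box [0,1]^2 with p = (2, 0.5).
# The minimizer is the projection x* = (1, 0.5).
rng = np.random.default_rng(0)
p = np.array([2.0, 0.5])
x_star = np.array([1.0, 0.5])
grad = 2 * (x_star - p)                      # gradient at x*: (-2, 0)

samples = rng.random((1000, 2))              # random feasible points in the box
vi = (samples - x_star) @ grad               # grad_c(x*)' (x - x*) for each sample
print(bool(np.all(vi >= -1e-12)))            # True: the variational inequality holds
```

Here vi = 2(1 − x1) ≥ 0 for every feasible x1 ≤ 1, so the inequality holds everywhere on the box, as the geometric condition predicts.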

Algebraic optimality conditions

• A point x* is an optimal solution if and only if x* is feasible and
      −∇c(x*) ∈ NX(x*) = cone({∇fi(x*)}i active)
• We can write membership in the normal cone algebraically:
      −∇c(x*) = ∑i λi ∇fi(x*)      (linear combination)
      λi ≥ 0  ∀i ∈ [m]             (conic combination)
      λi fi(x*) = 0  ∀i ∈ [m]      (active constraints only)
• These are called the Karush-Kuhn-Tucker (KKT) conditions

KKT conditions

KKT conditions for general nonlinear problems
Consider a convex optimization problem (differentiable):
      (P)  min  c(x)
           s.t. fi(x) ≤ 0,  1 ≤ i ≤ m
If x* is an optimal solution of (P) and the gradients of the active constraints at x* are linearly independent, the following conditions hold:
1. Stationarity: there exist multipliers λi ≥ 0 for i ∈ [m] such that
      ∇c(x*) + ∑i λi ∇fi(x*) = 0
2. Complementary slackness: for each constraint i ∈ [m],
      λi fi(x*) = 0
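Verifying a candidate point against the KKT conditions takes only a few lines. A sketch with a hypothetical problem (not from the slides), min x1² + x2² subject to 1 − x1 − x2 ≤ 0, whose optimum is x* = (0.5, 0.5) with multiplier λ = 1:

```python
import numpy as np

# Hypothetical problem: min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0
# Claimed optimum: x* = (0.5, 0.5) with multiplier lambda = 1.
x = np.array([0.5, 0.5])
lam = 1.0

grad_c = 2 * x                          # gradient of the objective
grad_f = np.array([-1.0, -1.0])         # gradient of f(x) = 1 - x1 - x2
f_val = 1 - x[0] - x[1]                 # constraint value (0: active)

stationarity = grad_c + lam * grad_f    # should vanish
comp_slack = lam * f_val                # should vanish
print(np.allclose(stationarity, 0), lam >= 0, np.isclose(comp_slack, 0))
# True True True
```

Since the problem is convex and Slater's condition holds (e.g. x = (1, 1) is strictly feasible), these checks certify global optimality.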

Equality constraints

• Consider the more general convex optimization problem:
      min  c(x)
      s.t. fi(x) ≤ 0,  1 ≤ i ≤ m
           hk(x) = 0,  1 ≤ k ≤ r
• Recall hk(x) = 0  ⇔  hk(x) ≤ 0 and −hk(x) ≤ 0
• Introduce multipliers μ+k ≥ 0, μ−k ≥ 0 and write the stationarity conditions:
      ∇c(x*) + ∑i λi ∇fi(x*) + ∑k μ+k ∇hk(x*) − ∑k μ−k ∇hk(x*) = 0
      ∇c(x*) + ∑i λi ∇fi(x*) + ∑k (μ+k − μ−k) ∇hk(x*) = 0

KKT conditions

KKT conditions for convex problems with equality constraints
If x* is an optimal solution and the gradients of the active constraints at x* are linearly independent, the following hold:
1. Stationarity: there exist multipliers λi ≥ 0 for i ∈ [m] and μk ∈ R for k ∈ [r] such that
      ∇c(x*) + ∑i λi ∇fi(x*) + ∑k μk ∇hk(x*) = 0
2. Complementary slackness: for each constraint i ∈ [m],
      λi fi(x*) = 0

• Multipliers for equality constraints are unconstrained in sign
• Complementary slackness for equality constraints already follows from primal feasibility
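The sign-freedom of equality multipliers is worth seeing on a concrete instance. A sketch with a hypothetical problem (not from the slides), min ∥x∥² subject to x1 + x2 + x3 = 1, where stationarity 2x* + μ·(1,1,1) = 0 gives x* = (1/3, 1/3, 1/3) and a negative multiplier μ = −2/3:

```python
import numpy as np

# Hypothetical problem: min ||x||^2  s.t.  x1 + x2 + x3 = 1
# Stationarity 2x* + mu*(1,1,1) = 0 forces x* = (1/3,1/3,1/3) and mu = -2/3.
x = np.full(3, 1 / 3)
mu = -2 / 3                                   # equality multiplier: sign-free

stationarity = 2 * x + mu * np.ones(3)        # should vanish
feasible = np.isclose(x.sum(), 1.0)           # primal feasibility
print(np.allclose(stationarity, 0), feasible) # True True
```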

Are the KKT conditions ever sufficient?

• The KKT conditions are necessary and sufficient in problems where strong duality holds
• In convex optimization, strong duality doesn’t hold automatically, but it holds under some regularity assumptions

Slater’s condition
There exists a feasible x such that fi(x) < 0 for all i ∈ [m].

• If Slater’s condition is satisfied, the KKT conditions are both necessary and sufficient!

What about nonconvex problems?

• There exist analogs of the KKT conditions for nonconvex problems, with two caveats:
• For nonconvex problems, typically the most we can certify is a local optimum (not necessarily a global one)
• If the problem becomes really ugly, the stationarity conditions can become more complex to define
• We can use the KKT conditions for nonlinear nonconvex problems, but we need to be aware of their limitations!

Quadratic optimization

Quadratic programming

• Convex quadratic program (QP):
      min  (1/2)x⊺Px + q⊺x + r
      s.t. Gx ≤ h
           Ax = b
  (P is positive semidefinite)
• Convex quadratically constrained quadratic program (QCQP):
      min  (1/2)x⊺P0x + q0⊺x + r0
      s.t. (1/2)x⊺Pix + qi⊺x + ri ≤ 0,  1 ≤ i ≤ m
           Ax = b
  (all Pi are positive semidefinite)

Unconstrained quadratic optimization

• Minimizing a quadratic function:
      v* = min f0(x) := (1/2)x⊺Px + q⊺x + r
• Optimality condition (KKT): ∇f0(x) = Px + q = 0
• If q ∉ span(P), then v* = −∞
• If P is invertible, then x* = −P⁻¹q is the unique optimum
• If P is singular but q ∈ span(P), there may be multiple optimal solutions
Least-squares regression

• We are given data points {(xi, yi)}, i = 1, …, n, and want to fit a linear model
• The model parameters form a vector of coefficients w (assume the first feature of each xi is constantly 1, so no separate intercept is needed)
• Optimization problem:
      min  ∑i (xi⊺w − yi)²
• Define X as the matrix with rows x1⊺, …, xn⊺ and y = (y1, …, yn); then w* = (X⊺X)⁻¹X⊺y

Quadratic optimization with equality constraints

• Equality-constrained convex QP:
      min  (1/2)x⊺Px + q⊺x + r
      s.t. Ax = b
• KKT conditions (just stationarity):
      Px + q + ∑i λi ai = Px + q + A⊺λ = 0
• Combine with the feasibility conditions Ax = b to get the linear system
      [ P  A⊺ ] [ x ]   [ −q ]
      [ A  0  ] [ λ ] = [  b ]

Quadratic optimization with inequality constraints

• Inequality-constrained convex QP:
      min  (1/2)x⊺Px + q⊺x + r
      s.t. Ax ≤ b
• Stationarity conditions:
      Px + q + A⊺λ = 0,  λ ≥ 0
• Complementary slackness:
      λi (bi − ai⊺x) = 0
• Nonlinear!

Nonlinear system of equations

• We can define slack variables s = b − Ax ≥ 0, so that
      λi (bi − ai⊺x) = 0  ⇔  λi si = 0
• This allows us to reformulate the KKT conditions as the nonlinear system
      [ Px + A⊺λ + q ]   [ 0 ]
      [ Ax + s − b   ] = [ 0 ]
      [ SΛe          ]   [ 0 ]
  where S = diag(s1, …, sm), Λ = diag(λ1, …, λm), and e = (1, …, 1), so SΛe = (λ1s1, …, λmsm)

Newton’s method

Root-finding
Given a differentiable function f, how do we find x such that f(x) = 0?

Newton’s iterative idea
1. Start with a guess xk
2. Replace f with its first-order Taylor approximation around xk:
      f(xk + d) ≈ f(xk) + f′(xk)·d = 0
3. Solve for the step d:
      f′(xk)·d = −f(xk)  ⇒  d = −f(xk) / f′(xk)
4. Update xk+1 = xk + d and go to step 2

Newton’s method in higher dimensions

• Consider the nonlinear system
      f1(x) = 0,  f2(x) = 0,  f3(x) = 0
  where the fi : Rn → R are all differentiable
• Starting from a guess xk, we obtain xk+1 = xk + d by solving
      ∇f1(xk)⊺d = −f1(xk)
      ∇f2(xk)⊺d = −f2(xk)
      ∇f3(xk)⊺d = −f3(xk)

Newton’s method for quadratic programming

• We want to solve the system
      [ Px + A⊺λ + q ]   [ 0 ]
      [ Ax + s − b   ] = [ 0 ]
      [ SΛe          ]   [ 0 ]
• The functions on the left are vector-valued, but the idea is the same
• Start with a guess (xk, sk, λk) with sk, λk > 0 and find d = (dx, ds, dλ) such that
      [ P  0   A⊺ ] [ dx ]   [ −Pxk − A⊺λk − q ]
      [ A  I   0  ] [ ds ] = [ −Axk − sk + b   ]
      [ 0  Λk  Sk ] [ dλ ]   [ −SkΛke          ]
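Before the quadratic-programming version, the scalar iteration is worth seeing end to end. A minimal sketch of Newton root-finding on a made-up example, f(x) = x² − 2, whose positive root is √2 ≈ 1.414214:

```python
# Newton's method on a scalar example: f(x) = x^2 - 2, root sqrt(2).
def newton(f, fprime, x, iters=20):
    for _ in range(iters):
        x = x - f(x) / fprime(x)   # step d = -f(x)/f'(x), then x <- x + d
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0)
print(round(root, 6))  # 1.414214
```

Starting from x = 1, the iterates converge quadratically, roughly doubling the number of correct digits per step.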

Interior-point method

• The direction d is rescaled so that s and λ always remain strictly positive
• At each iteration the solution is strictly feasible
• This is called an interior-point algorithm because we only work with interior points, never boundary points
• These results also apply to linear programming; philosophically, this is the opposite of what the simplex method does
• There are many more interior-point algorithms, and they are the core of convex optimization
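A minimal primal-dual sketch ties the Newton system and the rescaling together. Everything here is an illustrative simplification (fixed centering parameter `sigma`, fixed iteration count, the assumption that x = 0 is strictly feasible, no convergence safeguards), not a production solver:

```python
import numpy as np

def qp_interior_point(P, q, A, b, iters=50, sigma=0.1):
    """Primal-dual interior-point sketch for min (1/2)x'Px + q'x  s.t.  Ax <= b.
    Assumes x = 0 is strictly feasible (b > 0); no convergence safeguards."""
    m, n = A.shape
    x, s, lam = np.zeros(n), b.copy(), np.ones(m)
    for _ in range(iters):
        mu = s @ lam / m                          # duality measure
        r = np.concatenate([
            P @ x + A.T @ lam + q,                # dual residual
            A @ x + s - b,                        # primal residual
            s * lam - sigma * mu,                 # perturbed complementarity
        ])
        K = np.block([                            # Newton system from the slides
            [P, np.zeros((n, m)), A.T],
            [A, np.eye(m), np.zeros((m, m))],
            [np.zeros((m, n)), np.diag(lam), np.diag(s)],
        ])
        d = np.linalg.solve(K, -r)
        dx, ds, dlam = d[:n], d[n:n + m], d[n + m:]
        alpha = 1.0                               # rescale step: keep s, lam > 0
        for v, dv in ((s, ds), (lam, dlam)):
            neg = dv < -1e-14
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, s, lam = x + alpha * dx, s + alpha * ds, lam + alpha * dlam
    return x

# Made-up instance: min (1/2)||x||^2 - 2*x1  s.t.  x <= (1, 1); optimum x* = (1, 0)
x_opt = qp_interior_point(np.eye(2), np.array([-2.0, 0.0]),
                          np.eye(2), np.array([1.0, 1.0]))
print(np.round(x_opt, 4))
```

The 0.99 factor is the fraction-to-boundary rule: the step stops just short of the boundary so all iterates remain strictly interior.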

Cones and generalized inequalities

A new perspective on inequalities

• Consider the vector inequality x ≥ y  ⇔  xi ≥ yi ∀i
• Vector inequalities define a partial ordering on Rn

Definition
The relation ⪰ is a partial ordering on Rn if it satisfies
• x ⪰ x for all x ∈ Rn
• If x ⪰ y and y ⪰ x, then x = y
• If x ⪰ y and y ⪰ z, then x ⪰ z

• It is easy to see that linear inequalities satisfy these properties
• Notice that x ≥ y  ⇔  x − y ≥ 0  ⇔  x − y ∈ Rn+
• What kind of set is Rn+?

Convex cones, revisited

• Rn+ (the non-negative orthant) is a convex cone!

Any convex cone K induces a partial ordering on Rn:
      x ⪰K y  ⇔  x − y ∈ K

Useful convex cones

• The non-negative orthant Rn+
• Norm cones:
      K = { (x, t) ∈ Rn+1 : ∥x∥ ≤ t }
  – K is the epigraph of a (convex) norm function, so K is convex
  – If (x, t) ∈ K then for any a ≥ 0, ∥ax∥ = a∥x∥ ≤ at, so (ax, at) ∈ K (indeed a cone)
• Our favorite norm is the Euclidean norm ∥·∥2, which induces the second-order cone:
      Ln+1 = { (x1, …, xn, t) : √(x1² + ⋯ + xn²) ≤ t }

Second-order cone

[Figure: boundary of L³ ⊆ R³]

• Also called the Lorentz cone, quadratic cone, or ice-cream cone

Linear conic inequalities

Linear conic inequality (aka generalized inequality)
Given the partial ordering ⪰K induced by the cone K, we can define a linear conic inequality as
      Ax ⪰K b  ⇔  Ax − b ∈ K

• Can be nonlinear if K is nonlinear
• Unlike ≥, this is not a component-wise inequality

Second-order inequality example

Is it true that (1, 2, 3)⊺ ⪰L³ (2, 1, 1)⊺?

• (1, 2, 3)⊺ − (2, 1, 1)⊺ = (−1, 1, 2)⊺
• √((−1)² + 1²) = √2 ≤ 2, so the difference lies in L³ and the inequality holds
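The check above is mechanical: split the vector into (x, t) and compare ∥x∥2 with t. A short sketch verifying the same example numerically:

```python
import numpy as np

def in_soc(v, eps=1e-12):
    """Membership test for the second-order cone: (x, t) with ||x||_2 <= t."""
    x, t = v[:-1], v[-1]
    return bool(np.linalg.norm(x) <= t + eps)

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 1.0, 1.0])
print(in_soc(a - b))   # True: ||(-1, 1)||_2 = sqrt(2) <= 2
```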

Second-order cone programming (SOCP)

• Second-order cone program (SOCP), with a linear objective:
      min  q⊺x
      s.t. Aix ⪰Lni bi,  1 ≤ i ≤ m
  where each Ai ∈ Rni×n is partitioned as Ai = [Āi; ci⊺] and each bi ∈ Rni as bi = [b̄i; di]
• Equivalently:
      min  q⊺x
      s.t. ∥Āix − b̄i∥2 ≤ ci⊺x − di,  ∀i ∈ [m]
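Checking feasibility of a point against one SOCP constraint in the equivalent norm form is a one-liner. A sketch on made-up data (Āi = I, b̄i = 0, ci = (1, 1), di = 0, i.e. the constraint ∥x∥2 ≤ x1 + x2):

```python
import numpy as np

def socp_feasible(x, A_bar, b_bar, c, d, eps=1e-9):
    """Check one SOCP constraint ||A_bar x - b_bar||_2 <= c'x - d (made-up data)."""
    return bool(np.linalg.norm(A_bar @ x - b_bar) <= c @ x - d + eps)

A_bar, b_bar = np.eye(2), np.zeros(2)
c, d = np.array([1.0, 1.0]), 0.0
print(socp_feasible(np.array([3.0, 4.0]), A_bar, b_bar, c, d))   # True:  5 <= 7
print(socp_feasible(np.array([3.0, -4.0]), A_bar, b_bar, c, d))  # False: 5 > -1
```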

• Geometric optimality conditions in LP generalize to NLP as the KKT conditions
• Interior-point methods solve convex optimization problems
• Next week: application to portfolio optimization
