ECE599/CS519 Nonlinear Optimization (Spring 2021) Homework 1. Preliminaries (Due: 11:59pm, Monday, April 12)
Instruction: Students should provide sufficient detail of the logical steps used to derive their answers. Answers without sufficient justification will receive partial or no credit.
Reading: Before working on the homework questions, read Appendices A and B of the textbook (Luenberger and Ye).
1. Consider a subspace V of Rn with dimension K. Let {b1, b2, . . . , bK} denote a basis of V, i.e., b1, b2, . . . , bK are linearly independent and they span V. Let Φ = [b1 b2 · · · bK] ∈ Rn×K. Then, any element a of V can be written as a = Φc for some c ∈ RK.
Now, let x be an arbitrary vector in Rn. The orthogonal projection of x onto V, denoted by xV, is an element of V that is closest to x (why? think of the geometry of orthogonal projection). In other words, the projection is a solution of the following problem:
min_{y∈V} ∥x − y∥²    (1)
(a) Verify that xV = Φ(ΦT Φ)−1ΦT x.
(Hint: Consider the minimization problem min_{c∈RK} ∥x − Φc∥². An optimal solution of such an unconstrained problem satisfies the first-order condition, which is that the derivative of the objective function is zero at the optimal solution.)
(b) Prove that x − xV is orthogonal to any vector in V; in other words, show that vT (x − xV) = 0 for all v ∈ V.
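(For intuition only, not as part of the required answer: the claims in (a) and (b) can be sanity-checked numerically before attempting the proof. The NumPy sketch below uses an arbitrary random basis matrix Φ and vector x; it applies the formula from (a) and then checks that the residual x − xV is orthogonal to every column of Φ.)

import numpy as np

rng = np.random.default_rng(0)
n, K = 5, 2
Phi = rng.standard_normal((n, K))   # random basis matrix; columns are (almost surely) independent
x = rng.standard_normal(n)

# Projection formula from part (a): xV = Phi (Phi^T Phi)^{-1} Phi^T x
x_V = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T @ x)

# Part (b): the residual should be orthogonal to each basis vector (column of Phi),
# hence to every element of V. All printed entries should be ~0 up to round-off.
print(Phi.T @ (x - x_V))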
2. For each of the following sets, determine whether the set is open, closed, and/or compact (provide the rationale behind your answers). In addition, identify the closure, interior, and boundary of the set.
(a) S = {x ∈ Rn : ∥x∥ = 1}
(b) S = {x ∈ Rn : aT x ≥ b}, where a ∈ Rn (a ≠ 0) and b ∈ R.
(c) S = {x ∈ Rn : aT x = b}, where a ∈ Rn (a ≠ 0) and b ∈ R.
(d) S = {x ∈ Rn : |xi| < 1, i = 1, . . . , n}, where xi is the i-th component of x.
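(A warm-up example, purely for recalling the definitions and not among the sets above: the half-open interval [0, 1) ⊂ R is neither open (no neighborhood of 0 stays inside it) nor closed (it does not contain its limit point 1); it is bounded but not compact; its interior is (0, 1), its closure is [0, 1], and its boundary is {0, 1}.)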
3. (Limit Point) Consider a sequence {xk} with xk ∈ Rn for all k. Suppose that y is a limit point of {xk}, i.e., there exists a subsequence of {xk} that converges to y. Prove that, given any ε > 0 and any positive integer M, there exist infinitely many indices j satisfying (i) j ≥ M, and (ii) ∥xj − y∥ < ε.
(Looking ahead: most algorithms for solving nonlinear optimization problems are iterative algorithms that provide a sequence of solutions {xk}, wherein xk denotes the solution at the end of the k-th iteration. A typical convergence property of these algorithms is that any limit point of {xk} is a local optimum. Therefore, understanding the concept of a limit point is important for this course.)
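(A concrete illustration, using an arbitrary example sequence: in R, the sequence xk = (−1)^k + 1/k does not converge, but y = 1 is a limit point because the even-indexed subsequence converges to 1. The short sketch below lists, among the first N terms, the indices j ≥ M with |xj − y| < ε; the count keeps growing as N grows, which is consistent with the statement to be proved.)

import numpy as np

N, eps, M = 1000, 1e-2, 50
k = np.arange(1, N + 1)
x = (-1.0) ** k + 1.0 / k        # example sequence with limit points +1 and -1
y = 1.0                          # limit of the even-indexed subsequence

hits = [int(j) for j in k if j >= M and abs(x[j - 1] - y) < eps]
print(len(hits), hits[:5])       # many such indices j; more appear as N increases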
4. Recall the matrix norm induced by the Euclidean norm: for A ∈ Rm×n,
∥A∥ := max_{x∈Rn: ∥x∥≤1} ∥Ax∥.    (2)
(a) Prove that ∥Ax∥ ≤ ∥A∥ · ∥x∥ for all x ∈ Rn.
(b) Suppose that A is an n × n symmetric matrix. Let λ1 ≥ λ2 ≥ ... ≥ λn denote the n eigenvalues
of A. Prove that
∥A∥ = max_{y∈Rn: ∥y∥≤1} ∥diag(λ1, . . . , λn) · y∥    (3)
where diag(λ1, . . . , λn) denotes the n by n diagonal matrix with its diagonal entries being λ1, λ2, . . . , λn.
(Hint: ∥Ax∥ = √(xT AT Ax). Use an eigendecomposition of A.)
(c) Again, suppose that A is an n × n symmetric matrix with eigenvalues λ1 ≥ λ2 ≥ . . . ≥ λn. Prove that ∥A∥ = max{|λ1|, |λn|}.
(Hint: you can use the result you proved in part (b))
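(Again for intuition only, on an arbitrary random instance: in the NumPy sketch below, np.linalg.norm(A, 2) computes the induced norm defined in (2) as the largest singular value of A, and np.linalg.eigvalsh returns the eigenvalues of the symmetric matrix A in ascending order. The two printed numbers should agree up to round-off, consistent with part (c).)

import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                       # random symmetric matrix

eigs = np.linalg.eigvalsh(A)            # ascending: eigs[0] = lambda_n, eigs[-1] = lambda_1
induced_norm = np.linalg.norm(A, 2)     # matrix norm induced by the Euclidean norm, as in (2)
print(induced_norm, max(abs(eigs[0]), abs(eigs[-1])))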
5. Consider the following minimization problem:
min_{x∈Rn} (x − a)T A(x − a) + b    subject to    ∥x − a∥ ≤ ε
where a is a constant vector in Rn, b is a real scalar, and ε is a small positive real number.
(a) Suppose that A is positive semidefinite. What would be a solution of this minimization problem?
(b) Suppose that A is not positive semidefinite. Is a an optimal solution of this minimization problem? Provide a justification for your answer. (Hint: use the eigendecomposition of A.)
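(An illustration of the geometry, with an arbitrary choice of A, a, b, and ε; it is not a substitute for the justification asked for in (b). The sketch evaluates the objective at x = a and at a feasible point reached by stepping from a along an eigenvector associated with a negative eigenvalue of A.)

import numpy as np

n, eps, b = 3, 0.1, 2.0
a = np.array([1.0, -1.0, 0.5])
A = np.diag([2.0, 1.0, -3.0])             # symmetric but not positive semidefinite

def f(x):
    return (x - a) @ A @ (x - a) + b      # objective of problem 5

eigvals, eigvecs = np.linalg.eigh(A)
v = eigvecs[:, 0]                         # unit eigenvector for the smallest (negative) eigenvalue

print(f(a))                               # objective value at x = a (equals b)
print(f(a + eps * v))                     # value at a feasible point with ||x - a|| = eps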