ECE599/CS519 Nonlinear Optimization (Spring 2019)
Homework 5. Newton's Method / Conjugate Gradient / Constrained Optimization (Due: 11:59pm on Friday, June 7)
Instruction: Students should provide sufficient detail of the logical procedure used to derive their answers. Answers without sufficient justification will receive partial or no credit.
Reading: Section 8.8, Chapter 9, and Chapter 11 of the textbook (Luenberger and Ye).
1. Exercise 10 in Chapter 9 of the textbook. What does this result imply about the convergence rate of the conjugate gradient method compared with that of gradient descent? You may use the inequality in the textbook hint without proving it.
2. Exercise 4 in Chapter 11 of the textbook.
3. (MATLAB experiment) One of the best ways to gain understanding of how an optimization method actually performs is to implement the method and apply it to solve actual problems. In that way, you will be able to observe global and local convergence behaviors and see how the choice of algorithm parameters affects the performance. In this question, you will implement conjugate gradient methods and a modification of Newton's method, and interpret and connect the results to the theoretical analysis discussed in lectures. For this question, please attach your MATLAB code to the solution.
Consider the following nonlinear optimization problem (the same problem considered in the HW4 MATLAB questions):
min f(x, y) = x² − 5xy + y⁴ − 25x − 8y  over x = (x, y) ∈ ℝ².  (1)
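For reference, the objective in (1) and its derivatives, which every part below uses, can be coded as MATLAB anonymous functions. This is a minimal sketch; the names f, gradf, and hessf are illustrative choices, not required.

f = @(v) v(1)^2 - 5*v(1)*v(2) + v(2)^4 - 25*v(1) - 8*v(2);
gradf = @(v) [2*v(1) - 5*v(2) - 25; ...      % df/dx
              -5*v(1) + 4*v(2)^3 - 8];       % df/dy
hessf = @(v) [2, -5; -5, 12*v(2)^2];         % Hessian of f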
(a) Implement the Fletcher-Reeves method to solve this problem. You can review Lecture 16 and Section 9.6 of the textbook to understand the Fletcher-Reeves method, which is a conjugate gradient method for solving a non-quadratic problem. For the line search step, use Armijo's rule (with parameters η > 1 and ε ∈ (0, 1/2)). You can use the following termination criterion:
∥∇f(xₖ)∥ ≤ τ with τ ≈ 10⁻⁶.
You can set the initial point as x₀ = [0 0]ᵀ.
Explain the parameters of the algorithm you used. Plot (i) ∥∇f(xₖ)∥ versus k and (ii) f(xₖ) versus k, and provide interpretation. Check the gradient and the Hessian at the algorithm's output to verify that it is indeed a relative minimum point.
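As a starting point, the following minimal sketch shows the overall structure of Fletcher-Reeves with Armijo backtracking, using f and gradf as defined above. The parameter values (η = 2, ε = 0.2, initial step α = 1) are illustrative choices, and the periodic restart with d = −g that the textbook discusses is omitted for brevity.

% Fletcher-Reeves with Armijo backtracking (sketch; parameters illustrative)
x = [0; 0];  g = gradf(x);  d = -g;  tau = 1e-6;
eta = 2;  epsArm = 0.2;              % Armijo parameters: eta > 1, eps in (0, 1/2)
gradNorm = [];  fVal = [];           % histories for the two required plots
for k = 1:10000
    gradNorm(k) = norm(g);  fVal(k) = f(x);
    if gradNorm(k) <= tau, break; end
    alpha = 1;                       % backtrack: shrink alpha by 1/eta
    while f(x + alpha*d) > f(x) + epsArm*alpha*(g'*d)   % until sufficient decrease
        alpha = alpha/eta;
    end
    xNew = x + alpha*d;  gNew = gradf(xNew);
    beta = (gNew'*gNew)/(g'*g);      % Fletcher-Reeves coefficient
    d = -gNew + beta*d;              % new conjugate direction
    x = xNew;  g = gNew;
end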
(b) Similar to part (a), implement the Polak-Ribiere method to find a local minimum point of this problem. Use the same termination criterion and initial point as in part (a). Review Lecture 16 and Section 9.6 to understand this method. Note that the Polak-Ribiere method is the same as Fletcher-Reeves except for the step that calculates the new direction dₖ₊₁. Plot (i) ∥∇f(xₖ)∥ versus k and (ii) f(xₖ) versus k, and provide interpretation. Check the gradient and the Hessian at the algorithm's output to verify that it is indeed a relative minimum point.
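Relative to the Fletcher-Reeves sketch above, only the formula for the coefficient βₖ changes; inside the loop, replace the Fletcher-Reeves line with

beta = (gNew'*(gNew - g))/(g'*g);    % Polak-Ribiere coefficient

(Some implementations additionally clamp this as max(beta, 0), the so-called PR+ safeguard, but the basic method does not require it.)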
(c) In Lecture 14, we discussed the modification of Newton's method that ensures global convergence as well as order-two local convergence. Review Lecture 14 and Section 8.8 to understand this modification. Implement this modified Newton's method to find a local minimum point of the above problem. Use the same termination criterion and initial point as in part (a). Explain the parameters of the algorithm you used. Plot (i) ∥∇f(xₖ)∥ versus k and (ii) f(xₖ) versus k, and provide interpretation. Check the gradient and the Hessian at the algorithm's output to verify that it is indeed a relative minimum point.
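One way to realize this modification is sketched below: the Hessian is shifted by δI until it is positive definite, and the resulting direction is combined with an Armijo line search that tries the full Newton step first. The doubling rule for δ is an illustrative choice, not the only strategy discussed in Section 8.8.

% Modified Newton with Armijo backtracking (sketch; shift rule illustrative)
x = [0; 0];  g = gradf(x);  tau = 1e-6;
eta = 2;  epsArm = 0.2;
while norm(g) > tau
    H = hessf(x);  delta = 0;
    [~, p] = chol(H);                % p > 0 means H is not positive definite
    while p > 0
        delta = max(2*delta, 1e-4);  % increase shift until H + delta*I > 0
        [~, p] = chol(H + delta*eye(2));
    end
    d = -(H + delta*eye(2)) \ g;     % modified Newton direction
    alpha = 1;                       % full Newton step first, to preserve
    while f(x + alpha*d) > f(x) + epsArm*alpha*(g'*d)   % the order-two local rate
        alpha = alpha/eta;
    end
    x = x + alpha*d;  g = gradf(x);
end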
(d) Compare the plots obtained in parts (a), (b), and (c) with the corresponding plots obtained from gradient descent with Armijo's rule in HW4, and compare the convergence behaviors of these methods.