NUMERICAL OPTIMISATION – ASSIGNMENT 4
MARTA BETCKE, KIKO RUL·LAN
EXERCISE 1
Consider the linear system Ax = b with A ∈ R^(n×n), x ∈ R^n, b ∈ R^n.
(a) Implement the linear preconditioned Conjugate Gradient method.
Submit your implementation via Cody Coursework. [20pt]
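A minimal sketch of one possible implementation is given below; the function name pcg_sketch, the argument order, the residual-based stopping test and the returned residual history are assumptions made for illustration, not the required Cody Coursework interface.

% Sketch of a preconditioned CG solver for A*x = b with A symmetric
% positive definite and preconditioner M (names are illustrative).
function [x, resvec] = pcg_sketch(A, b, x0, tol, maxit, M)
    x = x0;
    r = b - A*x;                     % initial residual
    z = M \ r;                       % apply the preconditioner
    p = z;                           % first search direction
    rz = r' * z;
    resvec = norm(r);                % residual history
    for k = 1:maxit
        Ap = A * p;
        alpha = rz / (p' * Ap);      % exact step length along p
        x = x + alpha * p;
        r = r - alpha * Ap;
        resvec(end+1) = norm(r);     %#ok<AGROW>
        if norm(r) < tol
            break
        end
        z = M \ r;
        rz_new = r' * z;
        beta = rz_new / rz;          % conjugacy coefficient
        p = z + beta * p;
        rz = rz_new;
    end
end

With M = eye(n) this reduces to the unpreconditioned CG method.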
Consider the starting point x0 = (0, ..., 0)^T, a tolerance tol = 1e-12, and dimension n = 100. Let b be the vector defined by Ax* = b for the following value of x*:
xtrue = zeros(n,1);
xtrue(floor(n/4):floor(n/3)) = 1;
xtrue(floor(n/3)+1:floor(n/2)) = -2;
xtrue(floor(n/2)+1:floor(3/4*n)) = 0.5;
(b) Solve the given linear system for the following A matrices:
• A1 = diag(1:n);
• A2 = diag([ones(n-1, 1); 100]);
• 1d negative Laplacian:
A3 = -diag(ones(n-1, 1), -1) - diag(ones(n-1, 1), 1) + diag(2*ones(n, 1));
Compare the theoretical and actual convergence rates according to the distribution of eigenvalues of Ai, i = 1, 2, 3. Use the true solution¹ to evaluate the convergence rate.
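One way to set up the experiment is sketched below, assuming the pcg_sketch interface from part (a) and using M = eye(n), i.e. no preconditioning.

% Sketch of the experiment for part (b); pcg_sketch is the assumed
% interface from part (a), run here without preconditioning (M = I).
n = 100; tol = 1e-12; x0 = zeros(n, 1);
xtrue = zeros(n, 1);
xtrue(floor(n/4):floor(n/3)) = 1;
xtrue(floor(n/3)+1:floor(n/2)) = -2;
xtrue(floor(n/2)+1:floor(3/4*n)) = 0.5;
A1 = diag(1:n);
A2 = diag([ones(n-1, 1); 100]);
A3 = -diag(ones(n-1, 1), -1) - diag(ones(n-1, 1), 1) + diag(2*ones(n, 1));
for A = {A1, A2, A3}
    Ai = A{1};
    b = Ai * xtrue;
    [x, resvec] = pcg_sketch(Ai, b, x0, tol, 2*n, eye(n));
    kappa = cond(Ai);                             % condition number of Ai
    rate = (sqrt(kappa) - 1) / (sqrt(kappa) + 1); % rate in the CG error bound
    fprintf('kappa = %8.2e, bound rate = %5.3f, error = %8.2e, iters = %d\n', ...
            kappa, rate, norm(x - xtrue), numel(resvec) - 1);
end

The theoretical rate used here is the standard A-norm bound ||x_k - x*||_A <= 2 ((sqrt(kappa) - 1)/(sqrt(kappa) + 1))^k ||x_0 - x*||_A; the observed rate depends on the full eigenvalue distribution, not only on kappa.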
Submit solution via TurnitIn. [20pt]
EXERCISE 2
(a) Implement the Fletcher-Reeves conjugate gradient method. Submit your implementation via Cody Coursework. [20pt]
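A minimal sketch of a Fletcher-Reeves nonlinear CG iteration is given below; the function handles f and gradf, the simple capped Armijo backtracking line search, and the interface are illustrative assumptions rather than the required template (a strong Wolfe line search is normally used with Fletcher-Reeves).

% Sketch of Fletcher-Reeves nonlinear CG; interface and line search are
% illustrative assumptions. f and gradf are function handles.
function [x, fvals] = cg_fr_sketch(f, gradf, x0, tol, maxit)
    x = x0;
    g = gradf(x);
    p = -g;                               % start with steepest descent
    fvals = f(x);
    for k = 1:maxit
        if norm(g) < tol
            break
        end
        alpha = backtracking(f, x, p, g); % step length
        x = x + alpha * p;
        g_new = gradf(x);
        beta = (g_new' * g_new) / (g' * g); % Fletcher-Reeves coefficient
        p = -g_new + beta * p;
        g = g_new;
        fvals(end+1) = f(x);              %#ok<AGROW>
    end
end

function alpha = backtracking(f, x, p, g)
    % Simple capped Armijo backtracking, used here only as a placeholder
    % for a proper strong Wolfe line search.
    alpha = 1; rho = 0.5; c = 1e-4;
    for j = 1:50
        if f(x + alpha*p) <= f(x) + c*alpha*(g'*p)
            return
        end
        alpha = rho * alpha;
    end
end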
(b) Implement the Polak-Ribière conjugate gradient method. Submit your implementation via Cody Coursework. [20pt]
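The Polak-Ribière method differs only in the choice of beta; with the same variable names as in the sketch above, the update would read as follows (the max with zero is the common PR+ safeguard, which plain Polak-Ribière omits):

beta = max((g_new' * (g_new - g)) / (g' * g), 0); % Polak-Ribiere(+) coefficient
p = -g_new + beta * p;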
(c) Minimise the function
f(x, y) = x^2 + 5x^4 + 10y^2
from the initial points x0 = (1.2, 1.2)^T and x0 = (−1, −1)^T with a tolerance tol = 1e-12 using your implementation of the non-linear conjugate gradient methods. Explain your results, highlighting any relevant features of the minimisation process regarding the convergence of the methods. How would you modify them to ensure convergence?
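A possible driver for part (c), assuming the nonlinear CG interface sketched in part (a) and with the gradient worked out by hand, is:

% Runs for f(x, y) = x^2 + 5*x^4 + 10*y^2 from both starting points;
% cg_fr_sketch is the assumed interface sketched above.
f = @(v) v(1)^2 + 5*v(1)^4 + 10*v(2)^2;
gradf = @(v) [2*v(1) + 20*v(1)^3; 20*v(2)];   % gradient of f
tol = 1e-12;
for x0 = [[1.2; 1.2], [-1; -1]]               % loop over the two starting points
    [x, fvals] = cg_fr_sketch(f, gradf, x0, tol, 1000);
    fprintf('x0 = (%g, %g): x = (%g, %g), f = %g, iters = %d\n', ...
            x0(1), x0(2), x(1), x(2), f(x), numel(fvals) - 1);
end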
Submit solution via TurnitIn. [20pt]
Remark. The submission to TurnitIn should not be longer than 6 pages. Avoid submitting more
code than needed (if any) and focus on explaining your results.
¹ In practice, the true solution is not available, so a common practice is to consider the norm of the residuals.