
NUMERICAL OPTIMISATION – ASSIGNMENT 4

MARTA BETCKE

KIKO RUL·LAN

EXERCISE 1

Consider the linear system Ax = b with A ∈ R^{n×n}, x ∈ R^n, b ∈ R^n.

(a) Implement the linear preconditioned Conjugate Gradient method.
Submit your implementation via Cody Coursework. [20pt]
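
A minimal MATLAB sketch of preconditioned CG is given below, assuming A and the preconditioner M are symmetric positive definite; the function name my_pcg, the output resvec and the maxit argument are illustrative and may differ from the Cody Coursework template:

function [x, resvec] = my_pcg(A, b, x0, tol, maxit, M)
% Preconditioned conjugate gradient (sketch).
% Solves A*x = b; M is the preconditioner (M = eye(n) gives plain CG).
x = x0;
r = A*x - b;                      % residual
y = M \ r;                        % preconditioned residual
p = -y;                           % initial search direction
resvec = norm(r);                 % residual-norm history
for k = 1:maxit
    Ap    = A*p;
    alpha = (r'*y) / (p'*Ap);     % exact step along p
    x     = x + alpha*p;
    rnew  = r + alpha*Ap;
    resvec = [resvec; norm(rnew)];
    if norm(rnew) < tol
        break;
    end
    ynew = M \ rnew;
    beta = (rnew'*ynew) / (r'*y); % conjugacy coefficient
    p    = -ynew + beta*p;
    r    = rnew;
    y    = ynew;
end
end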

Consider the starting point x0 = (0, . . . , 0)^T, a tolerance tol = 1e-12 and a dimension n = 100. Let b be the vector defined by Ax∗ = b for the following value of x∗:

xtrue = zeros(n,1);

xtrue(floor(n/4):floor(n/3)) = 1;

xtrue(floor(n/3)+1:floor(n/2)) = -2;

xtrue(floor(n/2)+1:floor(3/4*n)) = 0.5;
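
A possible way to set up and solve one of the systems, assuming the my_pcg sketch above (with M = eye(n), i.e. no preconditioning), could look as follows:

n   = 100;
tol = 1e-12;
A   = diag(1:n);                          % e.g. A1; analogously for A2, A3
b   = A * xtrue;                          % right-hand side defined by A*xtrue = b
[x, resvec] = my_pcg(A, b, zeros(n,1), tol, n, eye(n));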

(b) Solve the given linear system for the following A matrices:

• A1 = diag(1:n);

• A2 = diag([ones(n-1, 1); 100]);

• 1d negative Laplacian:
A3 = -diag(ones(n-1, 1), -1) - diag(ones(n-1, 1), 1) + diag(2*ones(n, 1));

Compare the theoretical and actual convergence rates according to the distribution of eigenvalues of Ai, i = 1, 2, 3. Use the true solution¹ to evaluate the convergence rate.
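
For reference, the standard theoretical bound for CG on an SPD system is ||x_k − x∗||_A ≤ 2 ((√κ − 1)/(√κ + 1))^k ||x_0 − x∗||_A, with κ = λ_max/λ_min the condition number of A. A sketch of how the actual decay could be plotted against this bound, assuming a vector errA that stores the A-norm errors ||x_k − xtrue||_A recorded during the iteration:

ev    = eig(A);                             % eigenvalue distribution of A
kappa = max(ev) / min(ev);                  % condition number (A is SPD)
k     = (0:numel(errA)-1)';
bound = 2 * ((sqrt(kappa)-1)/(sqrt(kappa)+1)).^k * errA(1);
semilogy(k, errA, 'o-', k, bound, '--');    % actual vs. theoretical decay
legend('actual A-norm error', 'theoretical bound');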
Submit solution via TurnitIn. [20pt]

EXERCISE 2

(a) Implement the Fletcher-Reeves conjugate gradient method.
Submit your implementation via Cody Coursework. [20pt]
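
A minimal sketch of Fletcher-Reeves CG with a backtracking (Armijo) line search is shown below; the function name nlcg_fr and the maxit argument are illustrative, and the convergence theory for Fletcher-Reeves assumes a strong-Wolfe line search rather than the simple backtracking used here:

function [x, g] = nlcg_fr(f, grad, x0, tol, maxit)
% Fletcher-Reeves nonlinear conjugate gradient (sketch).
x = x0;
g = grad(x);
p = -g;
for k = 1:maxit
    if norm(g) < tol, break; end
    if g'*p >= 0, p = -g; end          % restart if p is not a descent direction
    % backtracking (Armijo) line search; a strong-Wolfe search is what
    % the Fletcher-Reeves theory requires
    alpha = 1; c1 = 1e-4; rho = 0.5;
    while f(x + alpha*p) > f(x) + c1*alpha*(g'*p)
        alpha = rho*alpha;
    end
    x    = x + alpha*p;
    gnew = grad(x);
    beta = (gnew'*gnew) / (g'*g);      % Fletcher-Reeves coefficient
    p    = -gnew + beta*p;
    g    = gnew;
end
end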

(b) Implement the Polak-Ribière conjugate gradient method.
Submit your implementation via Cody Coursework. [20pt]
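
The Polak-Ribière method differs from Fletcher-Reeves only in the choice of beta; a sketch of the replacement line is below (the max with 0 gives the common PR+ safeguard, plain Polak-Ribière omits it):

beta = max( (gnew' * (gnew - g)) / (g' * g), 0 );   % Polak-Ribiere(+) coefficient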

(c) Minimise the function
f(x, y) = x^2 + 5x^4 + 10y^2

from the initial points x0 = (1.2, 1.2)^T and x0 = (−1, −1)^T with a tolerance tol = 1e-12 using your implementation of the non-linear conjugate gradient methods. Explain your results, highlighting any relevant feature in the minimisation process regarding the convergence of the methods. How would you modify them to ensure convergence?
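
A sketch of the setup for part (c), assuming the nlcg_fr interface above with column-vector input v = [x; y] (the gradient follows from differentiating f):

f    = @(v) v(1)^2 + 5*v(1)^4 + 10*v(2)^2;       % objective
grad = @(v) [2*v(1) + 20*v(1)^3; 20*v(2)];       % its gradient
tol  = 1e-12;
[xmin1, g1] = nlcg_fr(f, grad, [ 1.2;  1.2], tol, 10000);
[xmin2, g2] = nlcg_fr(f, grad, [-1.0; -1.0], tol, 10000);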
Submit solution via TurnitIn. [20pt]

Remark. The submission to TurnitIn should not be longer than 6 pages. Avoid submitting more
code than needed (if any) and focus on explaining your results.

¹In practice, the true solution is not available, so it is common practice to consider the norm of the residual instead.