Iterative Methods: Convergence

In this livescript, you will learn how to determine the convergence of an iterative method. The SOR method can be used to improve the convergence, and hence speed up the computation, of the Gauss-Seidel method.

Consider an iterative method \{x\}=g(\{x\}) and define the error at step k as the difference between the k^{\text{th}} iterate and the true solution, i.e.

\{\epsilon\}^{(k)}=\{x\}^{(k)}-\{x\}

Note that we call a method linear if we can write it as \{\epsilon\}^{(k+1)}=[P]\{\epsilon\}^{(k)}, where [P] is a matrix.

Part 1:

(a) The Jacobi and Gauss-Seidel methods can be expressed in matrix form as

[M]\{x\}=[N]\{x\}+\{c\}

See if you can determine a general matrix expression for [P]. In lectures, you would have seen that the method converges if the magnitudes of all the eigenvalues of [P] are less than unity. Therefore, we characterise the convergence of the method by the parameter

\rho([P])=\max_{i}\{|\lambda_{i}|\}<1

which denotes the magnitude of the largest eigenvalue. Note that in the literature \rho is often called the spectral radius.

Going back to our favourite example

2x-3y+5z=10
4x+7y-2z=-5
2x-4y+25z=31

Equation 1.

(b) Apply your expressions from part (a) to Eq. (1) and determine the matrix [P].

(c) Determine the spectral radius of [P] in both cases and show that these methods are convergent. (A numerical check is sketched after the code at the end of this livescript.)

Successive Over-Relaxation

Let's hypothesise that if, at iteration k, we take a weighted average of the new solution and the old solution, i.e.

\{x\}^{(k+1)}=\omega\{x\}_{\text{Method}}^{(k+1)}+(1-\omega)\{x\}^{(k)}

then we can improve the convergence characteristics of the problem. For now we'll consider the Gauss-Seidel method, but you can generalise this to any iterative procedure.

Part 2:

(a) Let [A]=[D]+[L]+[U], where [D] is the diagonal part of [A], [L] is strictly lower triangular, and [U] is strictly upper triangular. Apply the Gauss-Seidel method and show that

\{x\}^{(k+1)}_{\text{Method}}=[D]^{-1}(\{c\}-[L]\{x\}^{(k+1)}-[U]\{x\}^{(k)})

(b) Apply successive over-relaxation and show that

([I]+\omega[D]^{-1}[L])\{x\}^{(k+1)}=\omega[D]^{-1}\{c\}+((1-\omega)[I]-\omega[D]^{-1}[U])\{x\}^{(k)}

(c) Find the matrix [P] for the SOR method.

(d) The following code implements the SOR method for Eq. (1), with \omega=1. Run the code and verify that this is equivalent to the Gauss-Seidel method. (A sketch comparing iteration counts for a few values of \omega follows the code.)

A = [2 -3 5;4 7 -2;2 -4 25];    % coefficient matrix of Eq. (1)
c = [10;-5;31];        % right-hand side of Eq. (1)
D = diag(diag(A))      % diagonal part of A
U = triu(A)-D          % strictly upper triangular part
L = A-D-U              % strictly lower triangular part
w = 1;                 % relaxation parameter (w = 1 recovers Gauss-Seidel)
x = [1;0;0];           % initial guess
i = 1;                 % iteration counter
tol = 10^-6;           % tolerance on the residual 2-norm
max_iter = 5000;       % safety cap on the number of iterations
fprintf(' x1 x2 x3 \n')
fprintf('%8.3f %8.3f %8.3f\n',x(1),x(2),x(3))
while norm(A*x-c,2)>tol && i<max_iter
    % SOR update in the matrix form of Part 2(b):
    % (D + w*L) x_new = w*c + ((1-w)*D - w*U) x_old
    x = (D+w*L)\(w*c+((1-w)*D-w*U)*x);
    fprintf('%8.3f %8.3f %8.3f\n',x(1),x(2),x(3))
    i = i+1;
end
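As a numerical check on Parts 1(b) and 1(c), the sketch below builds the iteration matrices for Eq. (1) and computes their spectral radii with eig. It assumes the standard splittings [M]=[D], [N]=-([L]+[U]) for Jacobi and [M]=[D]+[L], [N]=-[U] for Gauss-Seidel, together with the fact that the error propagates through [P]=[M]^{-1}[N]; treat it as a sketch to compare against your own derivation rather than as the worked answer.

% Sketch: spectral radii of the Jacobi and Gauss-Seidel iteration matrices for Eq. (1)
A = [2 -3 5;4 7 -2;2 -4 25];
D = diag(diag(A));            % diagonal part of A
U = triu(A)-D;                % strictly upper triangular part
L = A-D-U;                    % strictly lower triangular part
P_jacobi = -D\(L+U);          % Jacobi:       [P] = -[D]^{-1}([L]+[U])
P_gs = -(D+L)\U;              % Gauss-Seidel: [P] = -([D]+[L])^{-1}[U]
rho_jacobi = max(abs(eig(P_jacobi)))
rho_gs = max(abs(eig(P_gs)))
% Both spectral radii should come out below 1, confirming convergence.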
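To see how the relaxation parameter affects convergence speed, the sketch below reruns the matrix-form SOR update of Part 2(b) for a few illustrative values of \omega (the particular values tried are an arbitrary choice, not prescribed by the exercise) and reports how many iterations each needs to reach the same tolerance.

% Sketch: iteration counts of SOR for several relaxation parameters
A = [2 -3 5;4 7 -2;2 -4 25];
c = [10;-5;31];
D = diag(diag(A)); U = triu(A)-D; L = A-D-U;
tol = 10^-6; max_iter = 5000;
for w = [0.8 0.9 1.0 1.1]
    x = [1;0;0];               % same initial guess as above
    k = 0;
    while norm(A*x-c,2)>tol && k<max_iter
        x = (D+w*L)\(w*c+((1-w)*D-w*U)*x);
        k = k+1;
    end
    fprintf('omega = %4.2f : %d iterations\n',w,k)
end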