
Unit 1.6 Iterative Methods
For many very large matrices, matrix inversion or matrix decompositions are too expensive computationally. We want to solve $Ax = b$ without doing a decomposition.
Jacobi Method
First consider a $3 \times 3$ system
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3.
\end{aligned}$$
This can be rewritten
$$\begin{aligned}
x_1 &= (b_1 - a_{12}x_2 - a_{13}x_3)/a_{11} \\
x_2 &= (b_2 - a_{21}x_1 - a_{23}x_3)/a_{22} \\
x_3 &= (b_3 - a_{31}x_1 - a_{32}x_2)/a_{33}.
\end{aligned}$$
The idea behind an iterative method is that if I put in "good" values for $x_1$, $x_2$, and $x_3$ on the right-hand side, I will get back "better" values on the left-hand side.
We start with a guess at the solution, $x^{(0)}$, and iterate to get a better guess $x^{(k+1)}$ from the current guess $x^{(k)}$. This is called the Jacobi Iterative Method or just the Jacobi Method.
For $i = 1, \ldots, n$, it has the general form
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \ne i} a_{ij} x_j^{(k)}\right).$$
The answer will not usually be exact after $k$ iterations.
So how many iterations should we take?
The standard is to pick some tolerance for the relative error and terminate once the relative error is within the tolerance. It is easier in this case to use the relative error in $Ax$ rather than in $x$, called the relative residual:
$$\frac{\|Ax^{(k)} - b\|}{\|b\|}.$$
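To make the update concrete, here is a minimal sketch of a Jacobi implementation with the relative residual as the stopping test. The function name jacobiSketch is mine, not part of these notes, and it assumes $b$ is an $n \times 1$ column (as in the examples below) and that the iteration actually converges.

In [ ]:
using LinearAlgebra

# A minimal Jacobi sketch: every update in a sweep reads only the
# previous iterate xOld. Assumes a nonzero diagonal and a matrix for
# which the iteration converges (e.g., diagonally dominant).
function jacobiSketch(A, b, tol)
    n = size(A, 1)
    x = zeros(n, 1)                       # column vector, matching b
    relRes = norm(A*x - b) / norm(b)
    while relRes > tol
        xOld = copy(x)                    # freeze the old values for this sweep
        for j = 1:n
            s = sum(A[j, :] .* xOld) - A[j, j] * xOld[j]
            x[j] = (b[j] - s) / A[j, j]   # solve row j for x[j]
        end
        relRes = norm(A*x - b) / norm(b)
    end
    return x
end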
Gauss-Seidel Method
Since we have to do the calculations in some order, we can use new information as soon as we get it, i.e.,
$$\begin{aligned}
x_1^{(k+1)} &= (b_1 - a_{12}x_2^{(k)} - a_{13}x_3^{(k)})/a_{11} \\
x_2^{(k+1)} &= (b_2 - a_{21}x_1^{(k+1)} - a_{23}x_3^{(k)})/a_{22} \\
x_3^{(k+1)} &= (b_3 - a_{31}x_1^{(k+1)} - a_{32}x_2^{(k+1)})/a_{33}.
\end{aligned}$$
This is called the Gauss-Seidel Method and usually converges to the solution faster than the Jacobi Method.
For $i = 1, \ldots, n$, it has the general form, following Demmel 1997,
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)}\right).$$

In [ ]:
using LinearAlgebra

function GS(A, b, tol)
    n = size(A, 1)
    x = zeros(n, 1)
    relRes = norm(A*x - b) / norm(b)
    while relRes > tol
        for j = 1:n
            # Row j dotted with x, excluding the diagonal term
            s = sum(A[j, :] .* x) - A[j, j] * x[j]
            # Solve row j for x[j]; x already holds the newest values
            x[j] = (b[j] - s) / A[j, j]
        end
        relRes = norm(A*x - b) / norm(b)
    end
    return x
end
In [ ]:
A = rand(4,4)
b = rand(4,1)
xTrue = A\b

In [ ]:
xApprox = GS(A,b,1e-3)
This didn’t work. Why?
There is a convergence condition on both Jacobi and Gauss-Seidel.
We won't get into the details, but each of these methods corresponds to a splitting of the matrix $A$ into $M$ and $N$, $A = M - N$.
Note that a splitting is different from a decomposition: there is a subtraction instead of a multiplication. We then solve $Ax = b$ as
$$Mx = Nx + b.$$
The idea is to take $M$ such that $M$ is well-conditioned and easy to solve. We then can iterate
$$Mx^{(k+1)} = Nx^{(k)} + b.$$
For the Jacobi iteration, $M$ is the diagonal of $A$ and $N$ is the negative of everything not on the diagonal. For Gauss-Seidel, $M$ is the diagonal and everything below it, and $N$ is the negative of what is left.
We will omit the details, but if we define $R = M^{-1}N$, our convergence condition becomes
$$\rho(R) < 1,$$
where $\rho(R)$ is the spectral radius of $R$, the largest eigenvalue in absolute value.
For both Jacobi and Gauss-Seidel, a sufficient condition for $\rho(R) < 1$ is that $A$ be strictly diagonally dominant,
$$|a_{ii}| > \sum_{j \ne i} |a_{ij}| \quad \text{for all } i.$$
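To see the condition in action, we can compute $\rho(R)$ for the Gauss-Seidel splitting of the random matrix A above. This check is a sketch of mine, not part of the original notes; for a random matrix the spectral radius is typically at least 1, which is why the call above failed.

In [ ]:
# Gauss-Seidel splitting of the random A from above
M = tril(A)                        # diagonal of A and everything below it
N = M - A                          # negative of what is left, so A = M - N
R = M \ N
rho = maximum(abs.(eigvals(R)))    # convergence requires rho < 1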
Revisiting with an appropriate (diagonally dominant) matrix, $B$, we have

In [ ]:
B = [4. 1. 1. 0.; 0. -3. 0.5 -1.; 0. 0. 12. 0.1; -1. -0.1 0. 6.]

In [ ]:
xTrue = B\b

In [ ]:
xApprox = GS(B,b,1e-3)
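As a sanity check (a one-line sketch, not from the original notes), we can verify that B is strictly diagonally dominant row by row:

In [ ]:
# Each |diagonal entry| must exceed the sum of the other entries in its row;
# sum(abs.(B), dims=2) includes the diagonal, so subtract it back off.
all(abs.(diag(B)) .> vec(sum(abs.(B), dims=2)) .- abs.(diag(B)))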
Julia implements iterative solvers using a package, IterativeSolvers.jl
Documentation: https://juliamath.github.io/IterativeSolvers.jl/latest/
GitHub Repository: https://github.com/JuliaMath/IterativeSolvers.jl
In [ ]:
import Pkg; Pkg.add("IterativeSolvers")
using IterativeSolvers

In [ ]:
xApprox = jacobi(B,b)
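The package also provides the other stationary method discussed above, gauss_seidel. A usage sketch is below; see the linked documentation for keyword options such as the iteration count.

In [ ]:
xApprox = gauss_seidel(B, b)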
Gauss-Seidel and Jacobi are both relatively simple methods that are mainly used as components of other iterative methods.
More commonly used methods in practice are successive overrelaxation (SOR), Symmetric SOR with Chebyshev Acceleration (SSOR), and Krylov Subspace methods such as Conjugate Gradient and GMRES.
Many of these are in the IterativeSolvers.jl package.
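As a small taste of the Krylov methods, here is a hedged sketch using the package's cg (Conjugate Gradient) solver. Conjugate Gradient assumes a symmetric positive definite matrix, which B is not, so the matrix C below is constructed purely for illustration.

In [ ]:
C = B' * B + I             # B'B is symmetric; adding I keeps it safely positive definite
xApprox = cg(C, vec(b))    # cg expects a vector right-hand side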