
Numerical Methods in Engineering (ENGR20005)
Lecture 07 Dr. Leon Chan
lzhchan@unimelb.edu.au Department of Mechanical Engineering The University of Melbourne
Slides prepared by Prof. Andrew Ooi

L7.1:
Convergence of iterative methods

Example L07.1: In Example L06.2, we showed that it was possible to find the solution of the following 4×4 system of linear algebraic equations using both the LU decomposition and Gauss elimination methods:

\begin{bmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 8 & 9 \\ 8 & 20 & 4 & 3 \\ 5 & 6 & 7 & 8 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 4 \\ 8 \\ 10 \end{Bmatrix}
In this example, try to find the solution using the Point Jacobi and Gauss-Seidel methods.
If you try to find the solution of the above 4×4 system using both Gauss-Seidel and Point Jacobi, you will find that they both "blow up" (the numbers go to infinity): they do not converge.
In Examples L06.3 and L06.4, we have found that both Gauss-Seidel and Point Jacobi work just fine for the system of equations
\begin{bmatrix} 9 & 4 & 1 \\ 1 & 6 & 0 \\ 1 & -2 & -6 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} -17 \\ 4 \\ 14 \end{Bmatrix}
But now we find that they both do not work for
\begin{bmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 8 & 9 \\ 8 & 20 & 4 & 3 \\ 5 & 6 & 7 & 8 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 4 \\ 8 \\ 10 \end{Bmatrix}
Can we predict when they will work and when they won’t?

Iterative Methods (Lecture Recap)
An iterative method for solving for the vector {X} is
[M]{X}^(k+1) = [N]{X}^(k) + {C},   (7.1)
where {X}^(k) is the current (k'th) guess of the true solution {X}.
We would like to choose the matrix [M] such that Eq. (7.1) is easier to solve than the original system [A]{X} = {C}.
Here, we will cover two different methods:
1. Point Jacobi method: [M] consists of only the diagonal elements of [A].
2. Gauss-Seidel method: [M] consists of the lower triangular elements of [A].
In both cases [N] = [M] − [A]. A short MATLAB illustration follows.
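In MATLAB the two splittings can be formed directly; a small illustration (not from the slides; diag() and tril() are built-in functions):

% Forming [M] and [N] for the two iterative methods
A = [9 4 1; 1 6 0; 1 -2 -6];   % 3x3 system from Examples L06.3/L06.4
M_pj = diag(diag(A));          % Point Jacobi: keep only the diagonal of [A]
M_gs = tril(A);                % Gauss-Seidel: keep the lower triangle of [A]
N_pj = M_pj - A;               % in both cases [N] = [M] - [A]
N_gs = M_gs - A;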

Iterative Methods (Lecture Recap)
The general procedure for solving linear systems of equations with an iterative method is:
1. Define the [M] and [N] matrices.
2. Obtain an initial guess, {X}^(0).
3. Solve [M]{X}^(k+1) = [N]{X}^(k) + {C} for the new guess.
4. Define and obtain the maximum absolute value of the residual vector {r} = [A]{X}^(k) − {C}.
5. If the maximum value of |{r}| > ε, repeat from step 3. Otherwise, stop.
A minimal MATLAB sketch of this loop is given below.
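This sketch (an illustration, not verbatim from the slides) applies the five steps above to the 3×3 system from Examples L06.3/L06.4 with the Point Jacobi splitting:

% General iterative procedure: [M]{X}^(k+1) = [N]{X}^(k) + {C}
A = [9 4 1; 1 6 0; 1 -2 -6];  C = [-17; 4; 14];
M = diag(diag(A));            % step 1: Point Jacobi splitting
N = M - A;
x = zeros(size(C));           % step 2: initial guess {X}^(0)
r = A*x - C;                  % step 4: residual vector
while max(abs(r)) > 1.0e-8    % step 5: convergence test
    x = M\(N*x + C);          % step 3: solve for the new guess
    r = A*x - C;
end
disp(x)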

Convergence of Iterative Methods
Can we predict if/when the method(s) will converge? Or when they will "blow up"? Which methods converge faster?
To answer these questions, consider under what conditions the approximate solution {X}^(k) converges to the exact solution {X}.
Define the error vector at the k'th iteration to be
{ε}^(k) = {X}^(k) − {X}   (7.2)
Note that the exact solution satisfies [M]{X} = [N]{X} + {C}.

Convergence of Iterative Methods
Subtracting [M]{X} = [N]{X} + {C} from [M]{X}^(k+1) = [N]{X}^(k) + {C}, you get
[M]({X}^(k+1) − {X}) = [N]({X}^(k) − {X})   (7.3)
Substituting Eq. (7.2) into Eq. (7.3), we get
[M]{ε}^(k+1) = [N]{ε}^(k)
Rearranging,
{ε}^(k+1) = [M]^(−1)[N]{ε}^(k)

Convergence of Iterative Methods
Writing [P] = [M]^(−1)[N], this becomes
{ε}^(k+1) = [P]{ε}^(k)
I will prove on the next slide that if {ε}^(k+1) = [P]{ε}^(k), then
{ε}^(k+1) = [P]^(k+1){ε}^(0)
i.e. the error at the k+1'th iteration is related to the initial error via the matrix [P].

If {ε}^(k+1) = [P]{ε}^(k), then
{ε}^(1) = [P]{ε}^(0)
{ε}^(2) = [P]{ε}^(1) = [P]^2 {ε}^(0)
{ε}^(3) = [P]{ε}^(2) = [P]^3 {ε}^(0)
{ε}^(4) = [P]{ε}^(3) = [P]^4 {ε}^(0)
⋮
{ε}^(k+1) = [P]^(k+1) {ε}^(0)


Convergence of Iterative Methods
{ε}^(k+1) = [P]^(k+1) {ε}^(0)   (7.4)
The error at the k+1'th iteration is related to the initial error via the matrix [P].
Now assume that the matrix [P] is diagonalizable. Then
[P] = [S][Λ][S]^(−1)   (7.5)
where the columns of [S] are the eigenvectors of [P] and [Λ] is a diagonal matrix whose elements are the eigenvalues of [P]:
[Λ] = \begin{bmatrix} λ_1 & 0 & 0 & \cdots & 0 \\ 0 & λ_2 & 0 & \cdots & 0 \\ 0 & 0 & λ_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & λ_n \end{bmatrix}
I will show on the next slide that if [P] can be written as Eq. (7.5), then
[P]^(k+1) = [S][Λ]^(k+1)[S]^(−1)   (7.6)

If [P] = [S][Λ][S]^(−1), then, using [S]^(−1)[S] = [I] (the identity matrix),
[P]^2 = [S][Λ][S]^(−1) [S][Λ][S]^(−1) = [S][Λ]^2 [S]^(−1)
[P]^3 = [P]^2 [P] = [S][Λ]^2 [S]^(−1) [S][Λ][S]^(−1) = [S][Λ]^3 [S]^(−1)
[P]^4 = [P]^3 [P] = [S][Λ]^3 [S]^(−1) [S][Λ][S]^(−1) = [S][Λ]^4 [S]^(−1)
⋮
[P]^(k+1) = [S][Λ]^k [S]^(−1) [S][Λ][S]^(−1) = [S][Λ]^(k+1) [S]^(−1)
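Eq. (7.6) is easy to check numerically; a small illustration (not from the slides; the matrix used here is the Gauss-Seidel [P] computed in the example later in this lecture):

% Verify [P]^k = [S][Lambda]^k [S]^(-1) for a diagonalizable [P]
P = [0 -2 -3; 0 3 3.5; 0 -2.5 -2.25];
[S, Lambda] = eig(P);        % columns of S are eigenvectors of P
k = 5;
norm(P^k - S*Lambda^k/S)     % should be ~0 (round-off error only)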

Convergence of Iterative Methods
(Equivalently, [P] is diagonalizable whenever its eigenvectors are linearly independent.) Substituting Eq. (7.6) into Eq. (7.4) gives the following.

Convergence of Iterative Methods
Thus we can write the error at the k+1'th iteration as
{ε}^(k+1) = [S][Λ]^(k+1)[S]^(−1){ε}^(0)
Since [Λ]^(k+1) is diagonal with elements λ_i^(k+1), the error goes to zero as k → ∞ only if the magnitude of every eigenvalue of [P] is less than 1. The smaller the eigenvalue magnitudes, the faster the convergence. Thus, when
[Λ] = \begin{bmatrix} λ_1 & 0 & 0 \\ 0 & λ_2 & 0 \\ 0 & 0 & λ_3 \end{bmatrix}
we require |λ_i| < 1.

Convergence of Iterative Methods
Let's again consider an example where
[A] = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 2 \\ 2 & 3 & 2 \end{bmatrix}
When using the Gauss-Seidel approach, we specify [M] to be
[M] = \begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 2 & 0 \\ 2 & 3 & 2 \end{bmatrix}
[N] is computed as [N] = [M] − [A], thus
[N] = \begin{bmatrix} 0 & -a_{12} & -a_{13} \\ 0 & 0 & -a_{23} \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -2 & -3 \\ 0 & 0 & -2 \\ 0 & 0 & 0 \end{bmatrix}

Convergence of Iterative Methods
We can now compute [P] = [M]^(−1)[N]:
[M]^(−1) = \begin{bmatrix} 1 & 0 & 0 \\ -1.5 & 0.5 & 0 \\ 1.25 & -0.75 & 0.5 \end{bmatrix}
[P] = \begin{bmatrix} 1 & 0 & 0 \\ -1.5 & 0.5 & 0 \\ 1.25 & -0.75 & 0.5 \end{bmatrix} \begin{bmatrix} 0 & -2 & -3 \\ 0 & 0 & -2 \\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -2.0 & -3.0 \\ 0 & 3.0 & 3.5 \\ 0 & -2.5 & -2.25 \end{bmatrix}
The eigenvalues of [P] are
[Λ] = \begin{bmatrix} λ_1 & 0 & 0 \\ 0 & λ_2 & 0 \\ 0 & 0 & λ_3 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0.375 + 1.3636i & 0 \\ 0 & 0 & 0.375 - 1.3636i \end{bmatrix}
Therefore |λ_{2,3}| = √2 > 1, and thus the method will not converge for this choice of [A].
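The example above can be reproduced in a few lines of MATLAB (an illustration, not from the slides):

% Gauss-Seidel convergence check for A = [1 2 3; 3 2 2; 2 3 2]
A = [1 2 3; 3 2 2; 2 3 2];
M = tril(A);        % Gauss-Seidel choice of [M]
N = M - A;
P = M\N;            % [P] = [M]^(-1)[N]
abs(eig(P))         % returns 0, 1.4142, 1.4142 -> two magnitudes exceed 1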

Example L07.2: You are given the following two systems of equations. Test whether you can find the solution using the Gauss-Seidel and the Point Jacobi methods.

\begin{bmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 8 & 9 \\ 8 & 20 & 4 & 3 \\ 5 & 6 & 7 & 8 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{Bmatrix} = \begin{Bmatrix} 3 \\ 4 \\ 8 \\ 10 \end{Bmatrix} \qquad \begin{bmatrix} 9 & 4 & 1 \\ 1 & 6 & 0 \\ 1 & -2 & -6 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} -17 \\ 4 \\ 14 \end{Bmatrix}

Lecture07M02.m

% Define your [A] and {C}
A=[9 4 1; 1 6 0; 1 -2 -6];
C=[-17; 4; 14];

% Point Jacobi
M=[9 0 0; 0 6 0; 0 0 -6];
N=M-A;
P=inv(M)*N;
eig(P)   % Eigenvalues of [M]^(-1)[N]=[P] using Point Jacobi

% Gauss-Seidel
M=[9 0 0; 1 6 0; 1 -2 -6];
N=M-A;
P=inv(M)*N;
eig(P)   % Eigenvalues of [M]^(-1)[N]=[P] using Gauss-Seidel

% Define your [A] and {C}
A=[1 2 3 4; 3 4 8 9; 8 20 4 3; 5 6 7 8];
C=[3; 4; 8; 10];

% Point Jacobi
M=[1 0 0 0; 0 4 0 0; 0 0 4 0; 0 0 0 8];
N=M-A;
P=inv(M)*N;
eig(P)   % Eigenvalues of [M]^(-1)[N]=[P] using Point Jacobi

% Gauss-Seidel
M=[1 0 0 0; 3 4 0 0; 8 20 4 0; 5 6 7 8];
N=M-A;
P=inv(M)*N;
eig(P)   % Eigenvalues of [M]^(-1)[N]=[P] using Gauss-Seidel

All eigenvalues of [P] for both Point Jacobi and Gauss-Seidel have magnitude less than 1; hence both schemes converge for the 3×3 system. Some eigenvalues of [P] for both Point Jacobi and Gauss-Seidel have magnitude greater than 1; hence both schemes diverge for the 4×4 system. For the 3×3 system, the eigenvalue magnitudes for Point Jacobi are larger than those for Gauss-Seidel; hence one would expect Gauss-Seidel to converge faster.

Example L07.3: Solve
\begin{bmatrix} 9 & 4 & 1 \\ 1 & 6 & 0 \\ 1 & -2 & -6 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} -17 \\ 4 \\ 14 \end{Bmatrix}
using both the Point Jacobi and Gauss-Seidel schemes. Which of these schemes converges faster and why?

Lecture07M03.m

A=[9 4 1; 1 6 0; 1 -2 -6];
C=[-17; 4; 14];
x=GaussSeidelWithCounter(A,C);
x=PointJacobiWithCounter(A,C);

function x=GaussSeidelWithCounter(A,C)
% Compared to GaussSeidel(), this just includes a counter whose value is
% incremented by 1 every time the while() loop executes
x=zeros(size(C));
[n,n]=size(A);
counter=0;
residual=A*x-C;
while max(abs(residual)) > 1.0e-8
    for i=1:n
        sum=0.0;
        for j=1:n
            if j~=i
                sum=sum-A(i,j)*x(j);
            end
        end
        x(i)=(sum+C(i))/A(i,i);
    end
    residual=A*x-C;
    counter=counter+1;
    disp(counter)
end
end

function x=PointJacobiWithCounter(A,C)
% Compared to PointJacobi(), this just includes a counter whose value is
% incremented by 1 every time the while() loop executes
xold=ones(size(C));
x=xold;
[n,n]=size(A);
counter=0;
residual=A*x-C;
while max(abs(residual)) > 1.0e-8
    for i=1:n
        sum=0.0;
        for j=1:n
            if j~=i
                sum=sum-A(i,j)*xold(j);
            end
        end
        x(i)=(sum+C(i))/A(i,i);
    end
    xold=x;
    residual=A*x-C;
    counter=counter+1;
    disp(counter)
end
end

Gauss-Seidel takes 8 iterations whereas Point Jacobi takes 17 iterations to converge. Gauss-Seidel is almost always faster.

L7.2:
Comparison of cost of different methods

Cost: Direct vs Iterative
Iterative methods
To solve the system [A]{X} = {C}, the number of operations of an iterative method can be estimated as follows. Each step of
x_i^{(k+1)} = x_i^{(k)} + \frac{1}{a_{ii}} \left( c_i - \sum_{j=1}^{n} a_{ij} x_j^{(k)} \right)
requires n multiplications and one division, i.e. n + 1 operations. This is required for each of the n components x_i, ⇒ n(n+1) ≈ n² operations per iteration.
The total number of operations is approximately n² times the number of iterations.

Cost: Direct vs Iterative
Cost of iterative methods: ≈ n²k operations, where k is the number of iterations.
Cost of LU decomposition: decomposing [A] into [L][U] takes ≈ (2/3)n³ operations.
Iterative methods are therefore more efficient when k < 2n/3. For example, with n = 1000 the decomposition costs about 6.7 × 10⁸ operations, while k iterations cost about 10⁶ k, so the iterative method wins if it converges in fewer than roughly 670 iterations.

L7.3:
Relaxation

The Gauss-Seidel update can be blended with the previous guess,
x_i^(k+1) = ω x̂_i^(k+1) + (1 − ω) x_i^(k)
where x̂_i^(k+1) is the plain Gauss-Seidel value and ω is the relaxation factor, defined before the loop. There is an optimum value of ω that makes the system converge fastest.

% Gauss-Seidel sweep with relaxation (the function name, signature and
% initialisation are reconstructed; they were lost in extraction)
function x=GaussSeidelRelaxation(A,C,omega)
x=zeros(size(C)); xold=x;
[n,n]=size(A);
counter=0;
residual=A*x-C;
while max(abs(residual)) > 1.0e-8
    for i=1:n
        sum=0.0;
        for j=1:n
            if j~=i
                sum=sum-A(i,j)*x(j);
            end
        end
        x(i)=(sum+C(i))/A(i,i);
    end
    x=omega*x+(1-omega)*xold;   % relaxation step
    residual=A*x-C;
    xold=x;
    counter=counter+1;
end
end
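One way to locate the optimum ω mentioned above is simply to scan a range of values and count iterations. A minimal sketch (an illustration, not from the slides; the 3×3 system and tolerance are those used earlier):

% Scan the relaxation factor omega and report iteration counts
A=[9 4 1; 1 6 0; 1 -2 -6];  C=[-17; 4; 14];
n=length(C);
for omega=0.1:0.1:1.9
    x=zeros(n,1); xold=x; counter=0;
    residual=A*x-C;
    while max(abs(residual)) > 1.0e-8 && counter < 1000   % cap in case of divergence
        for i=1:n
            s=0.0;
            for j=1:n
                if j~=i
                    s=s-A(i,j)*x(j);
                end
            end
            x(i)=(s+C(i))/A(i,i);
        end
        x=omega*x+(1-omega)*xold;   % relaxation step
        xold=x;
        residual=A*x-C;
        counter=counter+1;
    end
    fprintf('omega = %.1f : %d iterations\n', omega, counter);
end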

Beware
Bad Poem Coming Up
No new information beyond this point!
Watching is purely optional

Summary of Methods for Linear Algebraic Equations

We have learnt methods of solving linear algebraic equations [A]{X} = {C}:
Gauss elimination, which reduces the augmented system to upper triangular form using a_{ij}^(k+1) = a_{ij}^(k) − (a_{ik}^(k)/a_{kk}^(k)) a_{kj}^(k);
LU decomposition, [A] = [L][U]; and
the iterative schemes [M]{X}^(k+1) = [N]{X}^(k) + {C}.

Direct methods are Gaussian Elimination and LU Decomposition
Point Jacobi and Gauss-Seidel are methods that require iterations
Which ones you should use will depend a bit on intuition
Iterative methods are cheap but they might not always converge
If the magnitude of eig([M]^(−1)[N]) > 1, they will blow up and diverge
Direct methods are expensive but a solution will usually emerge
As long as a_{ii} ≠ 0 and your problem is not enormous!

If you can think of a better poem summarising Linear Algebraic Equations, please post it on the discussion board for this subject under the topic
"Better Poem Summary for Linear Algebraic Equations"

L7.4:
Systems of nonlinear equations

Recall the Newton-Raphson method for finding roots of the equation f(x) = 0. This method can be extended to solve a system of nonlinear equations. Let's rederive the Newton-Raphson formula (Eq. (3.7) below) using a Taylor series.

We would like to solve
f(x) = 0
The Taylor series of f about x_0 is
f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \frac{f'''(x_0)}{3!}(x - x_0)^3 + ⋯
We truncate the Taylor series after the first-order term:
f(x) ≈ f(x_0) + f'(x_0)(x − x_0)
We want to find the value of x such that f(x) = 0. Setting f(x) = 0 in the truncated expansion gives
f(x_0) + f'(x_0)(x − x_0) = 0
Rearranging and setting x = x_{i+1} and x_0 = x_i gives
f'(x_i)(x_{i+1} − x_i) = −f(x_i)
Solving for x_{i+1} gives the Newton-Raphson formula
x_{i+1} = x_i − f(x_i)/f'(x_i)   (3.7)
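As a quick concrete check of this formula, a few Newton-Raphson steps in MATLAB (an illustration, not from the slides; f is chosen to match the worked example later in this section):

% Newton-Raphson in 1D: x_{i+1} = x_i - f(x_i)/f'(x_i)
f  = @(x) 3*cos(x) - x.^2 - 1;    % root of this is where x^2 + 1 = 3cos(x)
fp = @(x) -3*sin(x) - 2*x;        % derivative f'(x)
x = 1;                            % initial guess
for i = 1:10
    x = x - f(x)/fp(x);           % Newton-Raphson update, Eq. (3.7)
end
disp(x)                           % converged root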

We would like to solve
f_1(x_1, x_2, x_3) = 0
f_2(x_1, x_2, x_3) = 0
f_3(x_1, x_2, x_3) = 0
The truncated Taylor series for a function of 3 variables gives
f_1(x_1, x_2, x_3) = f_1(x_{1,0}, x_{2,0}, x_{3,0}) + (∂f_1/∂x_1)(x_1 − x_{1,0}) + (∂f_1/∂x_2)(x_2 − x_{2,0}) + (∂f_1/∂x_3)(x_3 − x_{3,0})
f_2(x_1, x_2, x_3) = f_2(x_{1,0}, x_{2,0}, x_{3,0}) + (∂f_2/∂x_1)(x_1 − x_{1,0}) + (∂f_2/∂x_2)(x_2 − x_{2,0}) + (∂f_2/∂x_3)(x_3 − x_{3,0})
f_3(x_1, x_2, x_3) = f_3(x_{1,0}, x_{2,0}, x_{3,0}) + (∂f_3/∂x_1)(x_1 − x_{1,0}) + (∂f_3/∂x_2)(x_2 − x_{2,0}) + (∂f_3/∂x_3)(x_3 − x_{3,0})
All partial derivatives are evaluated at (x_{1,0}, x_{2,0}, x_{3,0}). We want to find x_1, x_2 and x_3 such that f_1 = f_2 = f_3 = 0.
If we set f_1 = f_2 = f_3 = 0 on the left-hand side of the equations above, then rearrange and set x_n = x_{n,i+1} and x_{n,0} = x_{n,i}, we get
(∂f_1/∂x_1)(x_{1,i+1} − x_{1,i}) + (∂f_1/∂x_2)(x_{2,i+1} − x_{2,i}) + (∂f_1/∂x_3)(x_{3,i+1} − x_{3,i}) = −f_1(x_{1,i}, x_{2,i}, x_{3,i})
(∂f_2/∂x_1)(x_{1,i+1} − x_{1,i}) + (∂f_2/∂x_2)(x_{2,i+1} − x_{2,i}) + (∂f_2/∂x_3)(x_{3,i+1} − x_{3,i}) = −f_2(x_{1,i}, x_{2,i}, x_{3,i})
(∂f_3/∂x_1)(x_{1,i+1} − x_{1,i}) + (∂f_3/∂x_2)(x_{2,i+1} − x_{2,i}) + (∂f_3/∂x_3)(x_{3,i+1} − x_{3,i}) = −f_3(x_{1,i}, x_{2,i}, x_{3,i})
or, in matrix form,
\begin{bmatrix} (∂f_1/∂x_1)_i & (∂f_1/∂x_2)_i & (∂f_1/∂x_3)_i \\ (∂f_2/∂x_1)_i & (∂f_2/∂x_2)_i & (∂f_2/∂x_3)_i \\ (∂f_3/∂x_1)_i & (∂f_3/∂x_2)_i & (∂f_3/∂x_3)_i \end{bmatrix} \begin{Bmatrix} x_{1,i+1} - x_{1,i} \\ x_{2,i+1} - x_{2,i} \\ x_{3,i+1} - x_{3,i} \end{Bmatrix} = \begin{Bmatrix} -f_1(x_{1,i}, x_{2,i}, x_{3,i}) \\ -f_2(x_{1,i}, x_{2,i}, x_{3,i}) \\ -f_3(x_{1,i}, x_{2,i}, x_{3,i}) \end{Bmatrix}   (7.8)

The matrix of partial derivatives in Eq. (7.8) is the Jacobian. It is evaluated at the old guesses of x_1, x_2 and x_3; the unknown vector contains the better guesses minus the old guesses; and the right-hand side contains minus the functions evaluated at the old guesses. Eq. (7.8) therefore has exactly the form [A]{X} = {C}: we need to solve a linear system of equations to get a better guess, and then iterate.

Newton's Method for nonlinear equations
Hence, to solve the original system of equations, do the following:
1. Calculate all the required derivatives.
2. Guess the initial values x_{1,i}, x_{2,i} and x_{3,i}.
3. Solve Eq. (7.8) to obtain (x_{1,i+1} − x_{1,i}), (x_{2,i+1} − x_{2,i}) and (x_{3,i+1} − x_{3,i}). You can use Gauss elimination, LU decomposition, Gauss-Seidel, etc.
4. Calculate x_{1,i+1}, x_{2,i+1} and x_{3,i+1}.
5. Repeat steps 3 and 4 until a converged solution is obtained.
This method is a more general version of the Newton-Raphson method taught in Lecture L3.3.

System of nonlinear equations: example
Let's say you are asked to find the values of x_1 and x_2 such that
x_2 = x_1² + 1   (7.9)
x_2 = 3 cos(x_1)   (7.10)
[Figure: plot of y = x² + 1 and y = 3 cos(x) for 0 ≤ x ≤ 2; the intersection of the two curves is the point that satisfies both Eqs. (7.9) and (7.10).]

We rearrange
x_2 = x_1² + 1
x_2 = 3 cos(x_1)
to get
f_1(x_1, x_2) = x_1² + 1 − x_2 = 0
f_2(x_1, x_2) = 3 cos(x_1) − x_2 = 0
Applying Newton's method,
\begin{bmatrix} (∂f_1/∂x_1)_i & (∂f_1/∂x_2)_i \\ (∂f_2/∂x_1)_i & (∂f_2/∂x_2)_i \end{bmatrix} \begin{bmatrix} x_{1,i+1} - x_{1,i} \\ x_{2,i+1} - x_{2,i} \end{bmatrix} = \begin{bmatrix} -f_{1,i} \\ -f_{2,i} \end{bmatrix}   (7.11)
i.e.
\begin{bmatrix} 2x_1 & -1 \\ -3\sin(x_1) & -1 \end{bmatrix} \begin{bmatrix} x_{1,i+1} - x_{1,i} \\ x_{2,i+1} - x_{2,i} \end{bmatrix} = \begin{bmatrix} -f_{1,i} \\ -f_{2,i} \end{bmatrix}

Solve Eq. (7.11) for x_{1,i+1} and x_{2,i+1} and show that
x_{1,i+1} = x_{1,i} + \frac{f_{2,i}\,(∂f_1/∂x_2)_i - f_{1,i}\,(∂f_2/∂x_2)_i}{(∂f_1/∂x_1)_i (∂f_2/∂x_2)_i - (∂f_1/∂x_2)_i (∂f_2/∂x_1)_i}   (7.12)
x_{2,i+1} = x_{2,i} + \frac{f_{1,i}\,(∂f_2/∂x_1)_i - f_{2,i}\,(∂f_1/∂x_1)_i}{(∂f_1/∂x_1)_i (∂f_2/∂x_2)_i - (∂f_1/∂x_2)_i (∂f_2/∂x_1)_i}   (7.13)

The MATLAB function fsolve() can be used to solve systems of nonlinear algebraic equations. Lecture07Extra.m below solves
f_1(x_1, x_2) = x_1² + 1 − x_2 = 0
f_2(x_1, x_2) = 3 cos(x_1) − x_2 = 0
three ways: with fsolve(), with fsolve() supplied with the analytic Jacobian
[J] = \begin{bmatrix} 2x_1 & -1 \\ -3\sin(x_1) & -1 \end{bmatrix}
and with a hand-coded Newton iteration using Eqs. (7.12) and (7.13).

Lecture07Extra.m

clear all
% Initial guesses for x1 = 1, x2 = 1
xmatlab1 = fsolve(@NonlinearExample,[1;1]);

% Same, but supplying the analytic Jacobian to fsolve()
options = optimoptions(@fsolve,'Display','iter','SpecifyObjectiveGradient',true);
[xmatlab2,F,exitflag,output,JAC] = fsolve(@NonlinearExampleWithJacobian,[1;1],options);

% Hand-coded Newton iteration, Eqs. (7.12) and (7.13)
x=[1;1];
[f,J]=NonlinearExampleWithJacobian(x);
while max(abs(f)) > 1.0e-6
    det=J(1,1)*J(2,2)-J(1,2)*J(2,1);
    x(1)=x(1)+(f(2)*J(1,2)-f(1)*J(2,2))/det;
    x(2)=x(2)+(f(1)*J(2,1)-f(2)*J(1,1))/det;
    [f,J]=NonlinearExampleWithJacobian(x);
end

function [f,J] = NonlinearExampleWithJacobian(x)
f(1) = x(1)*x(1)+1-x(2);
f(2) = 3*cos(x(1))-x(2);
J(1,1)=2*x(1);        J(1,2)=-1;
J(2,1)=-3*sin(x(1));  J(2,2)=-1;
end

function f = NonlinearExample(x)
f(1) = x(1)*x(1)+1-x(2);
f(2) = 3*cos(x(1))-x(2);
end

Example L07.5: Use the Newton-Raphson method to solve the following set of 3 nonlinear algebraic equations:
3x_1 + cos(x_2 x_3) = 1/2
x_1² − 81(x_2 + 0.1)² + sin(x_3) = −1.06
e^{−x_1 x_2} + 20x_3 + (10π − 3)/3 = 0
Check your solution with the MATLAB function fsolve(). Note that you need to install the Optimization Toolbox in MATLAB before you can use the fsolve() function.

Nonlinear equations – Example
We want to solve the following set of nonlinear algebraic equations:
3x_1 + cos(x_2 x_3) = 1/2
x_1² − 81(x_2 + 0.1)² + sin(x_3) = −1.06
e^{−x_1 x_2} + 20x_3 + (10π − 3)/3 = 0
To use the Newton-Raphson method, we define the functions f_1(x_1, x_2, x_3), f_2(x_1, x_2, x_3) and f_3(x_1, x_2, x_3):
f_1(x_1, x_2, x_3) = 3x_1 + cos(x_2 x_3) − 1/2 = 0
f_2(x_1, x_2, x_3) = x_1² − 81(x_2 + 0.1)² + sin(x_3) + 1.06 = 0
f_3(x_1, x_2, x_3) = e^{−x_1 x_2} + 20x_3 + (10π − 3)/3 = 0

Nonlinear equations – Example
The corresponding partial derivatives are
∂f_1/∂x_1 = 3, ∂f_1/∂x_2 = −x_3 sin(x_2 x_3), ∂f_1/∂x_3 = −x_2 sin(x_2 x_3)
∂f_2/∂x_1 = 2x_1, ∂f_2/∂x_2 = −16.2 − 162x_2, ∂f_2/∂x_3 = cos(x_3)
∂f_3/∂x_1 = −x_2 e^{−x_1 x_2}, ∂f_3/∂x_2 = −x_1 e^{−x_1 x_2}, ∂f_3/∂x_3 = 20
The Jacobian matrix [J] is hence
[J] = \begin{bmatrix} ∂f_1/∂x_1 & ∂f_1/∂x_2 & ∂f_1/∂x_3 \\ ∂f_2/∂x_1 & ∂f_2/∂x_2 & ∂f_2/∂x_3 \\ ∂f_3/∂x_1 & ∂f_3/∂x_2 & ∂f_3/∂x_3 \end{bmatrix} = \begin{bmatrix} 3 & -x_3\sin(x_2x_3) & -x_2\sin(x_2x_3) \\ 2x_1 & -16.2 - 162x_2 & \cos(x_3) \\ -x_2e^{-x_1x_2} & -x_1e^{-x_1x_2} & 20 \end{bmatrix}   (7.14)
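Derivatives like these are easy to get wrong, so a quick finite-difference check is useful; a small sketch (not from the slides; it assumes the NonlinearExampleWithJacobian function from Lecture07M05.m below is on the MATLAB path):

% Compare the analytic Jacobian with a forward-difference approximation
x0 = [1; 1; 1];  h = 1e-7;
[f0, J] = NonlinearExampleWithJacobian(x0);
Jfd = zeros(3,3);
for j = 1:3
    xp = x0;  xp(j) = xp(j) + h;      % perturb the j'th variable
    fp = NonlinearExampleWithJacobian(xp);
    Jfd(:,j) = (fp(:) - f0(:))/h;     % j'th column of the Jacobian
end
norm(J - Jfd)                         % should be small if Eq. (7.14) is correct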

Nonlinear equations – Example
With the initial guess (x_{1,0}, x_{2,0}, x_{3,0}) = (0, 0, 0), substitute into Eq. (7.14):

Iter = 0:
\begin{bmatrix} 3.0 & 0.0 & 0.0 \\ 0.0 & -16.2 & 1.0 \\ 0.0 & 0.0 & 20.0 \end{bmatrix} \begin{Bmatrix} x_{1,1} - x_{1,0} \\ x_{2,1} - x_{2,0} \\ x_{3,1} - x_{3,0} \end{Bmatrix} = \begin{Bmatrix} -0.5000000000 \\ -0.25 \\ -10.47197551 \end{Bmatrix}

Iter = 1:
\begin{bmatrix} 3.0 & 0.0046301 & 0.00014934 \\ -0.33334 & -13.464 & 0.86602 \\ 0.016841 & 0.16619 & 20.0 \end{bmatrix} \begin{Bmatrix} x_{1,2} - x_{1,1} \\ x_{2,2} - x_{2,1} \\ x_{3,2} - x_{3,1} \end{Bmatrix} = \begin{Bmatrix} 0.000039 \\ -0.02827 \\ 0.0028 \end{Bmatrix}

Iter = 2:
\begin{bmatrix} 3.0 & 0.0040504 & 0.00011437 \\ -0.33331 & -13.805 & 0.86608 \\ 0.014744 & 0.166246 & 20.0 \end{bmatrix} \begin{Bmatrix} x_{1,3} - x_{1,2} \\ x_{2,3} - x_{2,2} \\ x_{3,3} - x_{3,2} \end{Bmatrix} = \begin{Bmatrix} 0.00000061 \\ 0.000359 \\ -0.0000 \end{Bmatrix}

Nonlinear equations – Example
and so on. With every iteration, the right-hand side becomes smaller, until it meets the convergence criterion ε and
x_{1,n+1} = x_{1,n} = −0.16667
x_{2,n+1} = x_{2,n} = −0.01481
x_{3,n+1} = x_{3,n} = −0.523476

Lecture07M05.m

xmatlab1 = fsolve(@NonlinearExample,[1;1;1]);
options = optimoptions(@fsolve,'Display','iter','SpecifyObjectiveGradient',true);
[xmatlab2,F,exitflag,output,JAC] = fsolve(@NonlinearExampleWithJacobian,[1;1;1],options);

% Hand-coded Newton iteration: at each step solve [J]{diff} = -{f}
% by LU decomposition (solvers from Lecture 6)
x=[1;1;1];   % Initial guesses for x1, x2 and x3
[f,J]=NonlinearExampleWithJacobian(x);
while max(abs(f)) > 1.0e-8
    [f,J]=NonlinearExampleWithJacobian(x);
    [L,U]=LUDecomposition(J);
    R=LowerTriangularSolver(L,-f);
    diff=UpperTriangularSolver(U,R);
    x=diff+x
end

function [f,J] = NonlinearExampleWithJacobian(x)
f(1) = 3*x(1)+cos(x(2)*x(3))-1/2;
f(2) = x(1)^2-81*(x(2)+0.1)^2+sin(x(3))+1.06;
f(3) = exp(-x(1)*x(2))+20*x(3)+(10*pi-3)/3;
J(1,1)=3;                     J(1,2)=-x(3)*sin(x(2)*x(3));   J(1,3)=-x(2)*sin(x(2)*x(3));
J(2,1)=2*x(1);                J(2,2)=-16.2-162*x(2);         J(2,3)=cos(x(3));
J(3,1)=-x(2)*exp(-x(1)*x(2)); J(3,2)=-x(1)*exp(-x(1)*x(2));  J(3,3)=20;
end

function f = NonlinearExample(x)
f(1) = 3*x(1)+cos(x(2)*x(3))-1/2;
f(2) = x(1)^2-81*(x(2)+0.1)^2+sin(x(3))+1.06;
f(3) = exp(-x(1)*x(2))+20*x(3)+(10*pi-3)/3;
end

function [L,U]=LUDecomposition(A)
[n,n]=size(A);
L=zeros(n,n);
U=zeros(n,n);
for i=1:n
    L(i,1)=A(i,1);
end
% (remainder of LUDecomposition, and the LowerTriangularSolver and
% UpperTriangularSolver functions, as written in Lecture 6; the slide
% truncates here)
