Computational Methods for Physicists and Materials Engineers
5. Systems of Nonlinear Equations
Recap: system of linear equations

$N$ linear equations, where the $x_i$ are unknowns to be determined and the $A_{ij}$ and $b_i$ are coefficients (knowns):

$$A_{11}x_1 + A_{12}x_2 + \cdots + A_{1N}x_N = b_1$$
$$A_{21}x_1 + A_{22}x_2 + \cdots + A_{2N}x_N = b_2$$
$$\vdots$$
$$A_{N1}x_1 + A_{N2}x_2 + \cdots + A_{NN}x_N = b_N$$

Matrix form:

$$\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & & \vdots \\ A_{N1} & A_{N2} & \cdots & A_{NN} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{pmatrix}, \qquad \mathbf{A}\mathbf{x} = \mathbf{b}$$
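As a quick reminder of the linear case (my addition, not from the lecture), such a system is solved directly in one call; the 3x3 numbers here are arbitrary:

import numpy as np

# arbitrary 3x3 example of A x = b
A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
b = np.array([1., 2., 3.])

x = np.linalg.solve(A, b)        # direct solver (LU factorization internally)
print(x, np.allclose(A @ x, b))  # solution and a residual check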
System of nonlinear equations

From linear to nonlinear. The linear system can be rewritten as a fixed-point equation:

$$\mathbf{A}\mathbf{x} = \mathbf{b} \;\Rightarrow\; \mathbf{x} = (\mathbf{I} - \mathbf{A})\mathbf{x} + \mathbf{b} \equiv \mathbf{f}(\mathbf{x})$$

Here $\mathbf{f}$ consists of a linear operator $(\mathbf{I} - \mathbf{A})$ acting on the vector $\mathbf{x}$ plus a constant vector $\mathbf{b}$. (If $\mathbf{g}$ is a linear operator, $\mathbf{g}(\alpha\mathbf{x}) = \alpha\,\mathbf{g}(\mathbf{x})$; for a matrix $\mathbf{A}$, $\mathbf{A}(\alpha\mathbf{x}) = \alpha\mathbf{A}\mathbf{x}$.)

However, in general $\mathbf{f}$ cannot be written in this form, e.g. $\mathbf{f}(\mathbf{x}) = (\mathbf{x}^T\mathbf{x})\,\mathbf{a}$ or $\mathbf{f}(\mathbf{x}) = \sin(\mathbf{x})$. If so, the system of equations is nonlinear. A system of nonlinear equations can only be solved by iterative methods.
Root finding problem: single-variate single equation

A single-variate nonlinear equation, e.g.

$$x^4 - 5x^3 + 2x^2 + x - 10 = 0, \qquad \sin(x) = \frac{1}{x}$$

[Figures: plots of $f(x)$ versus $x$ for these two equations, with the zeros of $f$ marked.]

System of nonlinear equations, e.g.: the product of two numbers is 10 and the sum of their squares is 36; what are the numbers?

$$xy = 10, \qquad x^2 + y^2 = 36$$
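As a preview of the tools introduced later in this lecture (my addition), this little system can be handed to a root finder once it is written in residual form; the initial guess is arbitrary:

import numpy as np
import scipy.optimize

def residuals(v):
    x, y = v
    # xy = 10 and x^2 + y^2 = 36, rewritten as F(x, y) = 0
    return [x*y - 10.0, x**2 + y**2 - 36.0]

sol = scipy.optimize.root(residuals, [1.0, 5.0])
print(sol.x)  # one solution pair; other initial guesses find the others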
Problems: calculation of phase diagrams

Consider an A-B binary system. The molar Gibbs free energy is a function of the composition of B, $x$. E.g. from the regular solution model,

$$g^\alpha(x) = (1-x)\,g^\alpha_A + x\,g^\alpha_B + (1-x)x\,\omega^\alpha + RT\left[(1-x)\ln(1-x) + x\ln x\right]$$

$g^\alpha_A$, $g^\alpha_B$: molar Gibbs free energies of pure A and pure B in the $\alpha$ structure
$\omega^\alpha$: the parameter which describes the interaction energy in $\alpha$
$R$: gas constant (8.314 J K$^{-1}$ mol$^{-1}$)

If there are two phases, we have $g(x)$ for each phase:

$$g^\beta(x) = (1-x)\,g^\beta_A + x\,g^\beta_B + (1-x)x\,\omega^\beta + RT\left[(1-x)\ln(1-x) + x\ln x\right]$$

Both $g^\alpha$ and $g^\beta$ vary with temperature $T$. The phase diagram can be produced by the common tangent construction at each $T$.
[Figure: $g$ versus $x$ at a fixed $T$, showing the curves $g^\alpha$ and $g^\beta$ and their common tangent touching at $x_1$ and $x_2$.]

At each temperature $T$, find the tangent line common to the two curves; the tangent points $x_1$ and $x_2$ correspond to the phase boundaries at this $T$. At the tangent points, the slope of $g^\alpha$ at $x_1$ is the same as the slope of $g^\beta$ at $x_2$:

$$\left.\frac{\partial g^\alpha}{\partial x}\right|_{x=x_1} = \left.\frac{\partial g^\beta}{\partial x}\right|_{x=x_2}$$
The tangent lines at $x_1$ of $g^\alpha$ and at $x_2$ of $g^\beta$ are

$$f_1(x) = g^\alpha(x_1) + \left.\frac{\partial g^\alpha}{\partial x}\right|_{x=x_1}(x - x_1), \qquad f_2(x) = g^\beta(x_2) + \left.\frac{\partial g^\beta}{\partial x}\right|_{x=x_2}(x - x_2)$$

The two lines should be the same line, $f_1(x) = f_2(x)$, which requires equal slopes and equal intercepts:

$$\left.\frac{\partial g^\alpha}{\partial x}\right|_{x=x_1} = \left.\frac{\partial g^\beta}{\partial x}\right|_{x=x_2}$$

$$g^\alpha(x_1) + (x_2 - x_1)\left.\frac{\partial g^\alpha}{\partial x}\right|_{x=x_1} = g^\beta(x_2)$$
From the regular solution model, the derivatives are

$$\frac{\partial g^\alpha}{\partial x} = g^\alpha_B - g^\alpha_A + (1-2x)\,\omega^\alpha + RT\ln\frac{x}{1-x}$$

$$\frac{\partial g^\beta}{\partial x} = g^\beta_B - g^\beta_A + (1-2x)\,\omega^\beta + RT\ln\frac{x}{1-x}$$

For each $T$, solve the system of nonlinear equations for the phase boundaries $(x_1, x_2)$:

$$\left.\frac{\partial g^\alpha}{\partial x}\right|_{x=x_1} = \left.\frac{\partial g^\beta}{\partial x}\right|_{x=x_2}, \qquad g^\alpha(x_1) + (x_2 - x_1)\left.\frac{\partial g^\alpha}{\partial x}\right|_{x=x_1} = g^\beta(x_2)$$

In reality, the function $g(x)$ is more complicated than the regular solution model, so a numerical method is required.
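To make this concrete, here is a minimal sketch of solving the common-tangent system at one temperature. All parameter values are invented for illustration (they are not from the lecture); for some parameter sets no common tangent exists, and the solver may then fail or wander outside (0, 1):

import numpy as np
import scipy.optimize

R = 8.314  # gas constant, J/(K mol)

# hypothetical regular-solution parameters (J/mol), chosen only for illustration
gA_a, gB_a, w_a = 0.0, 2000.0, 12000.0   # alpha phase
gA_b, gB_b, w_b = 1000.0, 0.0, 11000.0   # beta phase

def g(x, gA, gB, w, T):
    return (1-x)*gA + x*gB + (1-x)*x*w + R*T*((1-x)*np.log(1-x) + x*np.log(x))

def dg(x, gA, gB, w, T):
    return gB - gA + (1-2*x)*w + R*T*np.log(x/(1-x))

def common_tangent(v, T):
    x1, x2 = v
    s1 = dg(x1, gA_a, gB_a, w_a, T)
    # equal slopes; tangent of alpha at x1 passes through (x2, g_beta(x2))
    return [s1 - dg(x2, gA_b, gB_b, w_b, T),
            g(x1, gA_a, gB_a, w_a, T) + (x2 - x1)*s1 - g(x2, gA_b, gB_b, w_b, T)]

T = 600.0
sol = scipy.optimize.root(common_tangent, [0.2, 0.8], args=(T,))
print(sol.x, sol.success)  # (x1, x2): phase boundary compositions at this T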
System of nonlinear equations

Fixed-point problem: find $\mathbf{x}$ such that $\mathbf{f}(\mathbf{x}) = \mathbf{x}$
Root-finding problem: find $\mathbf{x}$ such that $\mathbf{f}(\mathbf{x}) = \mathbf{0}$

Basics of iterative method: Banach fixed point theorem

An operator $\mathbf{f}$ is called a contraction if there exists a constant $q \in [0, 1)$ s.t.

$$\|\mathbf{f}(\mathbf{x}) - \mathbf{f}(\mathbf{y})\| \le q\,\|\mathbf{x} - \mathbf{y}\| \quad \text{for all } \mathbf{x},\, \mathbf{y}$$

Banach fixed point theorem: let $\mathbf{f}$ be a contraction. There exists a unique element $\mathbf{x}^*$ (fixed point) s.t. $\mathbf{f}(\mathbf{x}^*) = \mathbf{x}^*$, and the sequence constructed by

$$\mathbf{x}_{n+1} := \mathbf{f}(\mathbf{x}_n)$$

converges to the unique fixed point $\mathbf{x}^*$ for an arbitrary initial guess $\mathbf{x}_0$.

A priori error estimate: $\|\mathbf{x}_n - \mathbf{x}^*\| \le \dfrac{q^n}{1-q}\,\|\mathbf{x}_1 - \mathbf{x}_0\|$
A posteriori error estimate: $\|\mathbf{x}_n - \mathbf{x}^*\| \le \dfrac{q}{1-q}\,\|\mathbf{x}_n - \mathbf{x}_{n-1}\|$
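As a small self-check (my addition), both estimates can be evaluated numerically for the contraction $f(x) = 0.4\arctan(x)$ used later in this lecture, where $q = 0.4$ and the fixed point is $x^* = 0$:

import numpy as np

f = lambda x: 0.4 * np.arctan(x)   # contraction: |f'(x)| <= q = 0.4
q = 0.4
x = [10.0]                         # initial guess x0
for k in range(15):
    x.append(f(x[-1]))

n = 10
true_err     = abs(x[n] - 0.0)                     # fixed point is x* = 0
a_priori     = q**n / (1 - q) * abs(x[1] - x[0])   # uses only x0 and x1
a_posteriori = q / (1 - q) * abs(x[n] - x[n-1])    # uses the last two iterates
print(true_err <= a_posteriori <= a_priori)        # both bounds hold: True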
Banach fixed point theorem for 1 equation w/ 1 variable

An operator $f$ is called a contraction if there exists a constant $q \in [0, 1)$ s.t. $|f(x) - f(y)| \le q|x - y|$ for all $x, y$.

Banach fixed point theorem: let $f$ be a contraction. There exists a unique solution $x^*$ (fixed point) s.t. $f(x^*) = x^*$, and the sequence constructed by

$$x_{n+1} := f(x_n)$$

converges to the unique fixed point $x^*$ for an arbitrary initial guess $x_0$. Graphically, this finds the intersection between the 2 curves $y = x$ and $y = f(x)$.

A priori error estimate: $|x_n - x^*| \le \dfrac{q^n}{1-q}\,|x_1 - x_0|$
A posteriori error estimate: $|x_n - x^*| \le \dfrac{q}{1-q}\,|x_n - x_{n-1}|$
Requirement for $f$: there exists a constant $q \in [0, 1)$ s.t. $|f(x) - f(y)| \le q|x - y|$ for all $x, y$. What does this mean?

Recap of calculus. The mean value theorem: if $f(x)$ is continuous on $[a, b]$ and differentiable on $(a, b)$, there exists $\xi \in (a, b)$ s.t.

$$f'(\xi) = \frac{f(b) - f(a)}{b - a}$$

[Figure: a curve $f(x)$ on $[a, b]$ with points $\xi$ and $\xi'$ where the tangent is parallel to the chord through $(a, f(a))$ and $(b, f(b))$.]
Applying the mean value theorem to the requirement: there exists $\xi$ between $x$ and $y$ s.t.

$$f'(\xi) = \frac{f(x) - f(y)}{x - y}$$

$$|f(x) - f(y)| = |f'(\xi)|\,|x - y| \le \sup_{\xi \in D}|f'(\xi)|\,|x - y|$$

Comparing with $|f(x) - f(y)| \le q\,|x - y|$, the convergence condition can be restated as

$$\sup_{x \in D}|f'(x)| < 1$$

i.e. over the whole range, the magnitude of the slope of the curve is always $< 1$.
Example: a function $f(x)$ whose slope satisfies $|f'(x)| < 1$ everywhere (the specific $f$ is shown only in the lecture figure).

[Figure: cobweb diagram of $y = f(x)$ against $y = x$; the intersection of the two curves is the fixed point $x^*$.]

Since the slope $|f'(x)| < 1$ for all $x$, there is a unique solution, and we can use the successive approximation $x_{n+1} := f(x_n)$ to find it. Starting from an initial guess $x_0$, evaluate $f(x_0)$ and set $x_1 = f(x_0)$; then $x_2 = f(x_1)$, $x_3 = f(x_2)$, and so on. Graphically, the iterates staircase toward the intersection:

$$x_0 \to x_1 \to x_2 \to x_3 \to \cdots, \qquad \lim_{n\to\infty} x_n = x^*$$
Homework problem

Try an $f(x)$ with $|f'(x)| > 1$ and the initial guess $x_0$ indicated in the figure. What happens to the iterates $x_{n+1} := f(x_n)$?

[Figure: $y = f(x)$ crossing $y = x$ with slope steeper than 1, with the suggested initial guess $x_0$.]
Another example: a function $f(x)$ with negative slope, $|f'(x)| < 1$ everywhere (again, the specific $f$ is shown only in the lecture figure).

[Figure: cobweb diagram of $y = f(x)$ against $y = x$; the intersection is the fixed point $x^*$.]

Since the slope $|f'(x)| < 1$ for all $x$, there is again a unique solution, and the successive approximation $x_{n+1} := f(x_n)$ finds it. Starting from the initial guess $x_0$, the iterates $x_1 = f(x_0)$, $x_2 = f(x_1)$, $x_3 = f(x_2)$, ... now land on alternating sides of $x^*$ (because the slope is negative), spiraling into the intersection:

$$x_1 < x_3 < \cdots < x^* < \cdots < x_4 < x_2 < x_0, \qquad \lim_{n\to\infty} x_n = x^*$$
Homework problem

Again try an $f(x)$ with $|f'(x)| > 1$ (with negative slope) and the initial guess $x_0$ indicated in the figure.

[Figure: $y = f(x)$ against $y = x$ with the suggested initial guess $x_0$.]
If the convergence condition is not satisfied, how do we rearrange the equation s.t. the condition is satisfied? Suppose

$$f(x) = x \quad \text{with} \quad \sup_{x\in D}|f'(x)| > 1$$

Let $y = f(x)$ and invert the function: $x = g(y)$, where $g = f^{-1}$. Then the fixed-point equation becomes

$$g(x) = x \quad \text{with} \quad \sup_{x\in D}|g'(x)| < 1$$

Graphically: where the curve $y = f(x)$ crosses the line $y = x$ at an angle steeper than 45°, the inverse curve $x = g(y)$ crosses it at an angle shallower than 45° (reflecting a curve about $y = x$ inverts its slope), so the iteration on $g$ converges.
Python: fixed-point iteration

$$x = f(x) = 0.4\arctan(x), \qquad f'(x) = \frac{0.4}{x^2 + 1}$$

Always $|f'(x)| \le 0.4$. So $x_{n+1} := f(x_n)$ converges to the fixed point $x^*$.

import numpy as np

def f(x):
    return 0.4 * np.arctan(x)

maxiter = 1000  # max allowed iterations
tol = 1e-10     # tolerance
x = 10.0        # initial guess
for n in range(1, maxiter+1):
    xprev = x
    x = f(x)
    if abs(x - xprev) < tol:
        print(f"Solution is x = {x}")
        break
if n == maxiter:
    print(f"Convergence is not achieved in {n} steps!")
Python: fixed-point iteration with SciPy

$$x = f(x) = 0.4\arctan(x)$$

import scipy.optimize
import numpy as np

def f(x):
    return 0.4 * np.arctan(x)

print(scipy.optimize.fixed_point(f, [10.0], xtol=1e-10, maxiter=1000))
# scipy.optimize.fixed_point(function, [initial guess], tolerance, maximum allowed steps)
Python: fixed-point iteration, a diverging case

$$x = f(x) = 10\arctan(x), \qquad f'(x) = \frac{10}{x^2 + 1}$$

About $x = 0$, $|f'(x)| > 1$. So $x_{n+1} := f(x_n)$ won't find $x^* = 0$.

[Figure: $f(x) = 10\arctan(x)$ and its inverse $g(f)$, plotted for $x \in [-20, 20]$.]

Rearrange by inverting the function:

$$f = 10\arctan(x) \;\Rightarrow\; x = \tan(0.1 f) \equiv g(f), \qquad f = g(f)$$

About $f = 0$, $|g'(f)| < 1$. So $f_{n+1} := g(f_n)$ can find $f^* = 0$.
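A short numerical demonstration of the two iterations (my addition; the starting value 0.1 is arbitrary, kept small so that tan stays on its principal branch):

import numpy as np

x = 0.1                        # start near the fixed point x* = 0
for n in range(10):
    x = 10 * np.arctan(x)      # |f'(0)| = 10 > 1: pushed away from 0
print(x)                       # ends up near an outer fixed point (~15)

x = 0.1
for n in range(50):
    x = np.tan(0.1 * x)        # inverted map, |g'(0)| = 0.1 < 1
print(x)                       # ~0: converges to the fixed point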
Banach fixed point theorem for a system of equations w/ N variables

$$x_1 = f_1(x_1, x_2, \ldots, x_N)$$
$$x_2 = f_2(x_1, x_2, \ldots, x_N)$$
$$\vdots$$
$$x_N = f_N(x_1, x_2, \ldots, x_N)$$

i.e. $\mathbf{x} = \mathbf{f}(\mathbf{x})$ with $\mathbf{x} = (x_1, \ldots, x_N)^T$ and $\mathbf{f} = (f_1, \ldots, f_N)^T$. The Jacobian matrix of $\mathbf{f}$ is

$$\mathbf{f}'(\mathbf{x}) = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_N} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_N} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial f_N}{\partial x_1} & \dfrac{\partial f_N}{\partial x_2} & \cdots & \dfrac{\partial f_N}{\partial x_N} \end{pmatrix}$$

Let $\mathbf{f}$ be continuous and differentiable with the property

$$q = \sup_{\mathbf{x}\in D}\|\mathbf{f}'(\mathbf{x})\| < 1$$

Then there exists a unique solution $\mathbf{x}^*$ s.t. $\mathbf{f}(\mathbf{x}^*) = \mathbf{x}^*$, and the sequence constructed by

$$\mathbf{x}_{n+1} := \mathbf{f}(\mathbf{x}_n)$$

converges to the unique fixed point $\mathbf{x}^*$ for an arbitrary initial guess $\mathbf{x}_0$.

A priori error estimate: $\|\mathbf{x}_n - \mathbf{x}^*\| \le \dfrac{q^n}{1-q}\,\|\mathbf{x}_1 - \mathbf{x}_0\|$
A posteriori error estimate: $\|\mathbf{x}_n - \mathbf{x}^*\| \le \dfrac{q}{1-q}\,\|\mathbf{x}_n - \mathbf{x}_{n-1}\|$
Example. Solve the system of nonlinear equations:

$$x_1 = 0.5\cos x_1 - 0.5\sin x_2$$
$$x_2 = 0.5\sin x_1 + 0.5\cos x_2$$

The Jacobian is

$$\mathbf{f}'(\mathbf{x}) = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{pmatrix} = \begin{pmatrix} -0.5\sin x_1 & -0.5\cos x_2 \\ 0.5\cos x_1 & -0.5\sin x_2 \end{pmatrix}$$

$$\|\mathbf{f}'(\mathbf{x})\|_F = \left[(0.5\sin x_1)^2 + (0.5\cos x_2)^2 + (0.5\cos x_1)^2 + (0.5\sin x_2)^2\right]^{1/2} = \frac{\sqrt{2}}{2} < 1$$

So we can construct the successive approximation $\mathbf{x}_{n+1} := \mathbf{f}(\mathbf{x}_n)$,

$$x_{1,n+1} = 0.5\cos x_{1,n} - 0.5\sin x_{2,n}$$
$$x_{2,n+1} = 0.5\sin x_{1,n} + 0.5\cos x_{2,n}$$

s.t. $\lim_{n\to\infty} \mathbf{x}_n = \mathbf{x}^*$.
Example

import numpy as np

def f(x1, x2):
    f1 = 0.5*np.cos(x1) - 0.5*np.sin(x2)
    f2 = 0.5*np.sin(x1) + 0.5*np.cos(x2)
    return (f1, f2)

maxiter = 10000  # max allowed iterations
tol = 1e-10      # tolerance

x1_arr = np.array([]); x2_arr = np.array([])
x1 = 0; x2 = 0   # initial guess
x1_arr = np.append(x1_arr, x1)
x2_arr = np.append(x2_arr, x2)
for n in range(1, maxiter+1):
    x1_prev = x1; x2_prev = x2
    x1, x2 = f(x1, x2)
    x1_arr = np.append(x1_arr, x1)
    x2_arr = np.append(x2_arr, x2)
    if np.linalg.norm([x1 - x1_prev, x2 - x2_prev], 2) < tol:
        print(f"Solution is (x1, x2) = ({x1}, {x2}).")
        break
if n == maxiter:
    print(f"Convergence is not achieved in {n} steps!")
Example: use SciPy

import scipy.optimize
import numpy as np

def f_arr(x_arr):
    return np.array([0.5*np.cos(x_arr[0]) - 0.5*np.sin(x_arr[1]),
                     0.5*np.sin(x_arr[0]) + 0.5*np.cos(x_arr[1])])
# note that the function used by scipy.optimize.fixed_point must return a numpy array

print(scipy.optimize.fixed_point(f_arr, [0., 0.], xtol=1e-10, maxiter=1000))
[Figure: trajectories of the iterates in the $(x_1, x_2)$ plane for the system $x_1 = 0.5\cos x_1 - 0.5\sin x_2$, $x_2 = 0.5\sin x_1 + 0.5\cos x_2$; different initial guesses converge to the same fixed point.]
Root finding: Newton's method

Solve the system of nonlinear equations $\mathbf{f}(\mathbf{x}) = \mathbf{0}$; for example, $f(x) = x^2 - 1 = 0$. This is a root-finding problem.

Again, we start from 1D. Solve a single nonlinear equation, i.e. find $x$ s.t. $f(x) = 0$, e.g.

$$f(x) = \sin(x)\cos(2x), \qquad f(x) = \cos(x) - x^3$$
Newton's method for 1 variable

Let $x_0$ be an approximation to the solution. By Taylor expansion about $x = x_0$ to 1st order,

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0)$$

Setting the linearization to zero gives the next approximation $x_1$:

$$f(x_0) + f'(x_0)(x_1 - x_0) = 0 \;\Rightarrow\; x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$$

Repeat with a Taylor expansion about $x = x_1$:

$$f(x) \approx f(x_1) + f'(x_1)(x - x_1), \qquad f(x_1) + f'(x_1)(x_2 - x_1) = 0 \;\Rightarrow\; x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}$$

In general,

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad \lim_{n\to\infty} x_n = x^*$$

[Figure: tangent-line construction showing $x_0 \to x_1 \to x_2$ approaching the zero of $f(x)$.]
Newton's method for 1 variable: example

$$f(x) = \cos(x) - x^3 = 0, \qquad f'(x) = -\sin(x) - 3x^2$$

import numpy as np
from sympy import *

x = symbols('x', real=True)
expr = cos(x) - x**3
dexpr = diff(expr, x)
f = lambdify([x], expr, "numpy")
fprime = lambdify([x], dexpr, "numpy")
# convert sympy expression to python function (see Lecture 2)

maxiter = 10000
tol = 1e-5
x = 0.5  # if initial guess is x = 0, won't converge
for n in range(1, maxiter+1):
    xprev = x
    x = x - f(x) / fprime(x)
    if abs(x - xprev) < tol:
        print(f"Solution is x = {x}")
        break
if n == maxiter:
    print(f"Convergence is not achieved in {n} steps!")
Newton's method for 1 variable: example, use SciPy

$$f(x) = \cos(x) - x^3 = 0, \qquad f'(x) = -\sin(x) - 3x^2$$

import numpy as np
import scipy.optimize
from sympy import *

x = symbols('x', real=True)
expr = cos(x) - x**3
dexpr = diff(expr, x)
f = lambdify([x], expr, "numpy")
fprime = lambdify([x], dexpr, "numpy")

print(scipy.optimize.newton(f, 0.5, fprime=fprime, tol=1e-5, maxiter=10000))
# scipy.optimize.newton(function, initial guess, derivative of function, tolerance, maximum allowed steps)
# it's possible that there are multiple solutions. try different initial guesses
# if fprime is not provided, the secant method will be applied
Newton's method for N variables

Let $\mathbf{x}_0$ be an approximation to the solution $\mathbf{x}$. By Taylor expansion of the multi-variate function about $\mathbf{x} = \mathbf{x}_0$ to 1st order,

$$\mathbf{f}(\mathbf{x}) \approx \mathbf{f}(\mathbf{x}_0) + \mathbf{f}'(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)$$

Recall: $\mathbf{f}'(\mathbf{x})$ denotes the Jacobian matrix.

$$\mathbf{f}(\mathbf{x}_0) + \mathbf{f}'(\mathbf{x}_0)(\mathbf{x}_1 - \mathbf{x}_0) = \mathbf{0} \;\Rightarrow\; \mathbf{x}_1 = \mathbf{x}_0 - [\mathbf{f}'(\mathbf{x}_0)]^{-1}\mathbf{f}(\mathbf{x}_0)$$

$$\mathbf{x}_{n+1} = \mathbf{x}_n - [\mathbf{f}'(\mathbf{x}_n)]^{-1}\mathbf{f}(\mathbf{x}_n)$$

In practice, do NOT calculate the inverse of the Jacobian. At step $n$, solve the system of linear equations

$$\mathbf{f}'(\mathbf{x}_n)\,(\mathbf{x}_{n+1} - \mathbf{x}_n) = -\mathbf{f}(\mathbf{x}_n)$$

which has the standard form $\mathbf{A}\mathbf{x} = \mathbf{b}$.
Direct solvers: Gaussian elimination, LU, QR. Iterative solvers: Jacobi, Gauss-Seidel, SOR, GMRES, CG.
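A minimal sketch of this update loop (my addition), reusing the earlier 2-variable example in root-finding form, with an analytic Jacobian and np.linalg.solve in place of matrix inversion:

import numpy as np

def F(x):
    # root-finding form of the earlier fixed-point example
    return np.array([0.5*np.cos(x[0]) - 0.5*np.sin(x[1]) - x[0],
                     0.5*np.sin(x[0]) + 0.5*np.cos(x[1]) - x[1]])

def J(x):
    # analytic Jacobian of F
    return np.array([[-0.5*np.sin(x[0]) - 1.0, -0.5*np.cos(x[1])],
                     [ 0.5*np.cos(x[0]),       -0.5*np.sin(x[1]) - 1.0]])

x = np.array([0.0, 0.0])  # initial guess
for n in range(100):
    dx = np.linalg.solve(J(x), -F(x))  # solve J dx = -F instead of inverting J
    x = x + dx
    if np.linalg.norm(dx) < 1e-10:
        break
print(x)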
Problems of the original Newton's method

Two issues:
(i) The initial guess does not guarantee convergence → damped Newton method
(ii) Computation of the Jacobian and of $[\mathbf{f}'(\mathbf{x}_n)]^{-1}$ at every step is costly → quasi-Newton method
Convergence of Newton's method

$$\mathbf{x}_{n+1} = \mathbf{x}_n - [\mathbf{f}'(\mathbf{x}_n)]^{-1}\mathbf{f}(\mathbf{x}_n) \equiv \mathbf{g}(\mathbf{x}_n)$$

This successive approximation is equivalent to the one used for solving the fixed point problem $\mathbf{x} = \mathbf{g}(\mathbf{x})$. The convergence condition is

$$\sup_{\mathbf{x}\in D}\|\mathbf{g}'(\mathbf{x})\| < 1$$

Theorem: let $\mathbf{f}$ be twice continuously differentiable. Assume that $\mathbf{x}^*$ is a zero of $\mathbf{f}$ s.t. $\mathbf{f}'(\mathbf{x}^*)$ is nonsingular. Then Newton's method is locally convergent: there exists a neighborhood $B$ of $\mathbf{x}^*$ s.t. the Newton iterations converge to $\mathbf{x}^*$ for every initial guess $\mathbf{x}_0 \in B$.

Newton's method is local. What about the fixed point method?
Damped Newton method

In Newton's method, the correction to the guess at the previous step is

$$\Delta\mathbf{x}_n = \mathbf{x}_{n+1} - \mathbf{x}_n = -[\mathbf{f}'(\mathbf{x}_n)]^{-1}\mathbf{f}(\mathbf{x}_n), \qquad \mathbf{x}_{n+1} = \mathbf{x}_n + \Delta\mathbf{x}_n$$

In the damped Newton method, a trial step is scaled by a damping factor $\lambda_n > 0$:

$$\mathbf{x}_{n+1}^{\mathrm{try}} = \mathbf{x}_n + \lambda_n\,\Delta\mathbf{x}_n$$

Initially $\lambda_n = 1$ (start from $n = 0$). When $\|\mathbf{f}(\mathbf{x}_{n+1}^{\mathrm{try}})\| > \|\mathbf{f}(\mathbf{x}_n)\|$, reject this $\mathbf{x}_{n+1}^{\mathrm{try}}$ and reduce $\lambda_n$ (usually halved) until $\|\mathbf{f}(\mathbf{x}_{n+1}^{\mathrm{try}})\| < \|\mathbf{f}(\mathbf{x}_n)\|$; then accept $\mathbf{x}_{n+1} = \mathbf{x}_{n+1}^{\mathrm{try}}$.
[Figure: Newton iterations on $f(x)$ for a good and for a poor initial guess; with the poor guess the full Newton step overshoots to $x_1^{\mathrm{try}}$ with $|f(x_1^{\mathrm{try}})| > |f(x_0)|$.]

By Newton's method, when the initial guess $x_0$ is sufficiently close to the zero point, the iteration converges; otherwise, it does not necessarily converge.

In the damped Newton method: if $|f(x_1^{\mathrm{try}})| > |f(x_0)|$, reject this trial and set $\lambda_0 := \lambda_0/2$, i.e. halve the step $x_1^{\mathrm{try}} - x_0$. Once $|f(x_1^{\mathrm{try}})| < |f(x_0)|$, accept the trial and let $x_1 = x_1^{\mathrm{try}}$.
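A compact sketch of this accept/reject loop in 1D (my addition, reusing f(x) = cos(x) - x^3 from the earlier example; the lower bound on the damping factor is an arbitrary safeguard):

import numpy as np

f = lambda x: np.cos(x) - x**3
fprime = lambda x: -np.sin(x) - 3*x**2

x = 2.0  # an initial guess away from the root
for n in range(100):
    dx = -f(x) / fprime(x)      # full Newton correction
    lam = 1.0                   # damping factor, start undamped
    while abs(f(x + lam*dx)) > abs(f(x)) and lam > 1e-8:
        lam *= 0.5              # reject the trial, halve the step
    x = x + lam*dx
    if abs(lam*dx) < 1e-10:
        break
print(x)  # ~0.8654, the root of cos(x) = x^3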
Quasi-Newton method

$$\mathbf{x}_{n+1} = \mathbf{x}_n - [\mathbf{f}'(\mathbf{x}_n)]^{-1}\mathbf{f}(\mathbf{x}_n)$$

Computation of the Jacobian $\mathbf{f}'(\mathbf{x}_n)$ at each step is costly. Replace $[\mathbf{f}'(\mathbf{x}_n)]^{-1}$ by some approximate matrix $\mathbf{A}_n$:

$$\mathbf{x}_{n+1} = \mathbf{x}_n - \mathbf{A}_n\mathbf{f}(\mathbf{x}_n)$$

Chord method: $\mathbf{A}_n = [\mathbf{f}'(\mathbf{x}_0)]^{-1}$, which is kept the same at all steps:

$$\mathbf{x}_{n+1} = \mathbf{x}_n - [\mathbf{f}'(\mathbf{x}_0)]^{-1}\mathbf{f}(\mathbf{x}_n); \qquad \text{for 1 variable:}\quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_0)}$$

Secant method, for 1 variable: replace the derivative by a difference,

$$J_n = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}, \qquad x_{n+1} = x_n - J_n^{-1}f(x_n)$$
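For instance, a secant iteration for the earlier example cos(x) - x^3 = 0 (my sketch; the two starting points are arbitrary):

import numpy as np

f = lambda x: np.cos(x) - x**3

x_prev, x = 0.5, 1.0  # two starting points instead of a derivative
for n in range(100):
    Jn = (f(x) - f(x_prev)) / (x - x_prev)  # difference quotient replaces f'(x)
    x_prev, x = x, x - f(x)/Jn
    if abs(x - x_prev) < 1e-10:
        break
print(x)  # ~0.8654, same root as with Newton's method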
Broyden's method

Secant method for N variables: replace the derivative by a difference,

$$\mathbf{f}'(\mathbf{x}_n)(\mathbf{x}_n - \mathbf{x}_{n-1}) \approx \mathbf{f}(\mathbf{x}_n) - \mathbf{f}(\mathbf{x}_{n-1}) \;\Rightarrow\; \mathbf{J}_n\Delta\mathbf{x}_n = \Delta\mathbf{f}_n$$

But the $\mathbf{J}_n$ which satisfies these $N$ equations is not unique ($\mathbf{J}_n$ has $N^2$ entries). $\mathbf{J}_{n-1}$ is known from the previous step, so find the $\mathbf{J}_n$ which is the closest to $\mathbf{J}_{n-1}$:

$$\min \|\mathbf{J}_n - \mathbf{J}_{n-1}\|_F \quad \text{s.t.} \quad \mathbf{J}_n\Delta\mathbf{x}_n = \Delta\mathbf{f}_n$$

Recall the Frobenius norm: $\|\mathbf{A}\|_F = \left(\sum_{i,k=1}^N A_{ik}^2\right)^{1/2}$

The solution is

$$\mathbf{J}_n = \mathbf{J}_{n-1} + \frac{\Delta\mathbf{f}_n - \mathbf{J}_{n-1}\Delta\mathbf{x}_n}{\Delta\mathbf{x}_n^T\Delta\mathbf{x}_n}\,\Delta\mathbf{x}_n^T, \qquad \mathbf{x}_{n+1} = \mathbf{x}_n - \mathbf{J}_n^{-1}\mathbf{f}(\mathbf{x}_n)$$
Good Broyden's method

Since $\mathbf{x}_{n+1}$ is updated by $\mathbf{J}_n^{-1}$, it is convenient to directly update $\mathbf{J}_n^{-1}$ (by the Sherman-Morrison formula):

$$\mathbf{J}_n^{-1} = \mathbf{J}_{n-1}^{-1} + \frac{\left(\Delta\mathbf{x}_n - \mathbf{J}_{n-1}^{-1}\Delta\mathbf{f}_n\right)\Delta\mathbf{x}_n^T\,\mathbf{J}_{n-1}^{-1}}{\Delta\mathbf{x}_n^T\,\mathbf{J}_{n-1}^{-1}\,\Delta\mathbf{f}_n}, \qquad \mathbf{x}_{n+1} = \mathbf{x}_n - \mathbf{J}_n^{-1}\mathbf{f}(\mathbf{x}_n)$$

Damped good Broyden's method: combine with damping,

$$\Delta\mathbf{x}_n = \mathbf{x}_{n+1}^{\mathrm{try}} - \mathbf{x}_n = -\mathbf{J}_n^{-1}\mathbf{f}(\mathbf{x}_n), \qquad \mathbf{x}_{n+1} = \mathbf{x}_n + \lambda_n\Delta\mathbf{x}_n$$

with the damping factor $\lambda_n > 0$ chosen as in the damped Newton method.
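A minimal sketch of the (undamped) good Broyden iteration (my addition, again on the 2-variable example in root-finding form; since F(x) = f(x) - x with a mild contraction f, -I is a reasonable initial guess for the inverse Jacobian):

import numpy as np

def F(x):
    return np.array([0.5*np.cos(x[0]) - 0.5*np.sin(x[1]) - x[0],
                     0.5*np.sin(x[0]) + 0.5*np.cos(x[1]) - x[1]])

x = np.array([0.0, 0.0])
Jinv = -np.eye(2)   # initial guess for [f'(x0)]^-1 (here f' is close to -I)
Fx = F(x)
for n in range(100):
    dx = -Jinv @ Fx                 # quasi-Newton step
    x_new = x + dx
    Fx_new = F(x_new)
    df = Fx_new - Fx
    v = Jinv @ df
    # good Broyden rank-1 update of the inverse Jacobian
    Jinv = Jinv + np.outer(dx - v, dx @ Jinv) / (dx @ v)
    x, Fx = x_new, Fx_new
    if np.linalg.norm(dx) < 1e-10:
        break
print(x)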
Python: root finding

Fixed-point problem: $x = 0.4\arctan(x)$
Equivalent root-finding problem: $0.4\arctan(x) - x = 0$

import scipy.optimize
import numpy as np

def f_fp(x):
    # function for fixed-point problem
    return 0.4 * np.arctan(x)

def f_rf(x):
    # function for root-finding problem
    return 0.4 * np.arctan(x) - x

print(scipy.optimize.fixed_point(f_fp, [10], xtol=1e-10, maxiter=1000))

sol = scipy.optimize.root(f_rf, [10], method='hybr')
# scipy.optimize.root(function, initial guess, method)
# 'hybr': hybrid method (default); 'broyden1': good broyden
print(sol.x)
# returns: .x: solution; .success: whether it exited with success
Python: root finding, N variables

Fixed-point problem:
$$x_1 = 0.5\cos x_1 - 0.5\sin x_2, \qquad x_2 = 0.5\sin x_1 + 0.5\cos x_2$$

Root-finding problem:
$$0.5\cos x_1 - 0.5\sin x_2 - x_1 = 0, \qquad 0.5\sin x_1 + 0.5\cos x_2 - x_2 = 0$$

def f_arr_fp(x_arr):
    # function for fixed-point problem
    return np.array(
        [0.5*np.cos(x_arr[0]) - 0.5*np.sin(x_arr[1]),
         0.5*np.sin(x_arr[0]) + 0.5*np.cos(x_arr[1])])

def f_arr_rf(x_arr):
    # function for root-finding problem
    return np.array(
        [0.5*np.cos(x_arr[0]) - 0.5*np.sin(x_arr[1]) - x_arr[0],
         0.5*np.sin(x_arr[0]) + 0.5*np.cos(x_arr[1]) - x_arr[1]])

print(scipy.optimize.fixed_point(f_arr_fp, [0, 0], xtol=1e-10, maxiter=1000))

sol = scipy.optimize.root(f_arr_rf, [0, 0], method='hybr')
print(sol.x)
Summary: system of nonlinear equations

Solved by iterative methods.

Fixed-point problem $\mathbf{f}(\mathbf{x}) = \mathbf{x}$: recursion formula according to the Banach fixed point theorem.

Root-finding problem $\mathbf{f}(\mathbf{x}) = \mathbf{0}$:
(i) Newton's method
(ii) Damped Newton method: improves convergence
(iii) Quasi-Newton methods: improve speed (chord method, secant method, Broyden's method, good Broyden's method, damped good Broyden's method)