
Numerical Methods & Scientific Computing: lecture notes
Week 5: aim to cover
root-finding: bisection, fixed point iteration (Lecture 9)
error propagation, bisection, fixed point iteration (Lab 5)
Newton’s method, secant method, fzero (Lecture 10)

Root-finding
Root-finding methods
Find x such that f(x) = 0
[Figure: graph of y = f(x); the roots are the points x where the curve crosses the x-axis]

Iterative processes
‘find zeroes of f ’, ‘find roots of f ’, ‘root-finding’
We look for iterative procedures
guess x0 → x1 → x2 → ···
So we must construct a rule so that the sequence of iterates converges to the root x*. Since we have to stop the iteration, we get a truncation error (this dominates roundoff error until very close to the root).
Most problems of continuous mathematics cannot be solved by finite algorithms.
Trefethen’s Maxims

Fixed point iteration
Rearrange f (x) = 0 to x = g(x) (not uniquely)
then try the iteration
xn+1 = g(xn)
does it converge to the fixed point? Let’s try it …
Definition
A point x* that satisfies x* = g(x*) is a fixed point of the function g
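A minimal MATLAB sketch of this iteration, using the illustrative rearrangement g(x) = cos(x) for f(x) = x − cos(x) = 0 (this example is my addition, not from the notes):

g = @(x) cos(x);      % one possible rearrangement x = g(x)
x = 1.0;              % initial guess x0
for n = 1:20
    x = g(x);         % xn+1 = g(xn)
end
disp(x)               % approaches the fixed point x* = 0.7391...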

Behaviour of fixed point iteration
what happens?
1 sometimes it blows up!
2 if it converges, the (absolute) error behaves like
en+1 ≈ k en
linear convergence
3 k is different for different g
4 the smaller k is, the faster the convergence
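A hedged numerical illustration of points 2 and 4, again with the illustrative choice g(x) = cos(x): the ratio of successive absolute errors settles towards a constant k ≈ |g'(x*)|.

g = @(x) cos(x);
xstar = fzero(@(x) x - cos(x), 0.7);    % accurate reference root
x = 1.0;  eold = abs(x - xstar);
for n = 1:8
    x = g(x);
    enew = abs(x - xstar);
    fprintf('n = %d   e = %.2e   ratio = %.4f\n', n, enew, enew/eold);
    eold = enew;
end
% the printed ratios approach |g'(x*)| = |sin(x*)| = 0.6736...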

Explanation by Taylor series …
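A sketch of that argument (the expansion itself is not written out in these notes): with en = xn − x*,

en+1 = xn+1 − x* = g(xn) − g(x*) ≈ g'(x*)(xn − x*) = g'(x*) en

so k ≈ | g'(x*) |, and the iteration converges locally provided | g'(x*) | < 1.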

Some Theorems!
Definition
If g is defined on [a,b] and x* = g(x*) for some x* ∈ [a,b], then g has a fixed point x* ∈ [a,b]
Theorem
Sufficient conditions for a unique fixed point
1 Existence: If g ∈ C[a,b] and g(x) ∈ [a,b] (g maps [a,b] onto [a,b] or a subinterval), then g has a fixed point in [a,b]
2 Uniqueness: If, in addition, g'(x) exists on (a,b) and | g'(x) | ≤ k < 1 on (a,b), then g has a unique fixed point in [a,b]

Useful theorems from analysis
Theorem (Intermediate Value Theorem, IVT)
Given f ∈ C[a,b] with f(a) > 0, f(b) < 0
⟹ there exists c ∈ [a,b] such that f(c) = 0

Theorem (Mean Value Theorem, MVT)
Given f ∈ C^1[a,b]
⟹ there exists c ∈ [a,b] such that f(b) − f(a) = f'(c)(b − a)

Proofs!
Proof (Existence).
If g(a) = a or g(b) = b a fixed point exists, so suppose not. Then g(a) > a and g(b) < b. Define h(x) = g(x) − x, so h(a) > 0 and h(b) < 0
⟹ by IVT, there exists c ∈ [a,b] such that h(c) = 0
⟹ g(c) = c
⟹ c is a fixed point of g.

Proof (Uniqueness).
We have | g'(x) | ≤ k < 1, and suppose p, q are fixed points of g with p ≠ q. We derive a contradiction, which implies that p = q.
By MVT, there exists c such that g(p) − g(q) = g'(c)(p − q)
⟹ | p − q | = | g(p) − g(q) | = | g'(c) | | p − q | ≤ k | p − q | < | p − q |
which cannot be true, so we have proved that p = q
⟹ the fixed point is unique.

Convergence of fixed point iteration
| g'(x) | ≤ k < 1 ⟹ g is a contraction mapping
Theorem
Under the conditions of the previous theorem, for any x0 ∈ [a,b] the sequence xn = g(xn−1) converges to the unique fixed point x*.
Proof.
By the previous theorem, a unique fixed point exists. Since g maps [a,b] into [a,b], the sequence of iterates {xn} is defined.
| xn − x* | = | g(xn−1) − g(x*) | = | g'(c) | | xn−1 − x* | ≤ k | xn−1 − x* | ≤ k^2 | xn−2 − x* | ≤ ··· ≤ k^n | x0 − x* |
so lim n→∞ | xn − x* | ≤ | x0 − x* | lim n→∞ k^n = 0 since k < 1, so xn → x*

so far so good BUT
not cheap to decide when the conditions of the theorem are met for a given g
this method may not find all roots
but this method is used for some difficult problems via the Contraction Mapping Theorem

Pseudocode
input x0
compute x1
while (stopping criterion not satisfied) and (no. of iterations < maximum)
    compute next iterate
end
output results
What stopping criterion to use?

Stopping criteria
Many possible:
1 tolerance on residual of function: | f(xn) | < fTol
2 tolerance on absolute change: | xn − xn−1 | < AbsTol
3 tolerance on relative change: | xn − xn−1 | / | xn | < RelTol

1 is OK if f is not shallow at the root
2 is OK provided AbsTol is not too small
3 is usually safe if RelTol > a few u
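A minimal MATLAB sketch of this pseudocode, using stopping criterion 2 (absolute change) plus an iteration cap; the function name fixedpoint and its arguments are my choices, not from the notes:

function [x, niter] = fixedpoint(g, x0, AbsTol, maxIter)
% Fixed point iteration xn+1 = g(xn), stopping when the absolute
% change |xn - xn-1| falls below AbsTol or after maxIter iterations.
x = g(x0);                          % compute x1
niter = 1;
while abs(x - x0) >= AbsTol && niter < maxIter
    x0 = x;
    x = g(x0);                      % compute next iterate
    niter = niter + 1;
end
end

For example, [x, n] = fixedpoint(@cos, 1, 1e-10, 100) returns x ≈ 0.7391.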

Mixed tolerance
Combine absolute and relative tolerances
1 mixed tolerance: | xn − xn−1 | < AbsTol + RelTol | xn |
2 mixed residual: | f(xn) | < AbsfTol + RelfTol | f(x0) |
1 switches between an absolute tolerance when | xn | is small and a relative tolerance when | xn | > 1
2 uses | f(x0) | to ‘set a scale’ for the relative residual
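As a sketch, criterion 1 is a single test in code (the variable names xold and converged are illustrative):

% mixed (absolute + relative) tolerance on the change in x
if abs(x - xold) < AbsTol + RelTol*abs(x)
    converged = true;
end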

Bisection
Fixed point iteration is very simple but not guaranteed to converge! The simplest globally convergent method is bisection, aka interval halving.
Start with an interval [a1, b1] such that f(a1), f(b1) are of opposite sign
⟹ a root lies in [a1, b1] by IVT

Pseudocode
Find the midpoint p1 = (a1 + b1)/2. Repeat the following step until convergence:
if f(p1) = 0
    x* = p1
else    (choose the subinterval in which the root lies)
    if f(p1) and f(a1) have the same sign
        x* in [p1, b1]
    else
        x* in [a1, p1]
    end
end
This is just a floating point version of binary search.
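A MATLAB sketch of this pseudocode (the function name bisection and the interval-width stopping test are my additions, not from the notes):

function p = bisection(f, a, b, tol)
% Bisection: requires f(a) and f(b) to have opposite signs.
if sign(f(a)) == sign(f(b))
    error('f(a) and f(b) must have opposite signs');
end
while (b - a)/2 > tol
    p = (a + b)/2;                    % midpoint
    if f(p) == 0
        return                        % hit the root exactly
    elseif sign(f(p)) == sign(f(a))
        a = p;                        % root lies in [p, b]
    else
        b = p;                        % root lies in [a, p]
    end
end
p = (a + b)/2;
end

For example, bisection(@(x) x.^2 - 2, 1, 2, 1e-10) returns ≈ 1.41421356.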

Properties of bisection method
Since we keep x* within the interval at each step, we are guaranteed convergence
We have a precise error bound, since
| pN − x* | ≤ max(pN − aN, bN − pN) ≤ | bN − aN | / 2
Interval halves at every step, so we get an extra bit of accuracy every step.
⟹ For a given tolerance tol, the required number of steps N is known in advance:
2^(−N) (b − a) ≤ tol  ⟹  N ≥ log2( (b − a) / tol )
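For example (an illustrative calculation, not from the notes): with [a, b] = [0, 1] and tol = 10^(−6), N ≥ log2(10^6) ≈ 19.9, so N = 20 bisection steps are guaranteed to be enough.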

Example
error bound halves at every step (actual error jumps around a bit)

End of Lecture 9