
Numerical Methods & Scientific Computing: lecture notes
IVPs
RK methods
Week 11: aim to cover
Derivation of RK methods
Linear stability of RK methods
Variable time-step RK methods: ode23, ode45
Other MATLAB solvers (brief)

Systematic derivation
Are there any more such methods? Are they really 2nd order? We look for explicit 2-stage methods of the form:

    s_1 = f(t_n, y_n)                              (1)
    s_2 = f(t_n + c_2 h, y_n + h a_{21} s_1)       (2)
    y_{n+1} = y_n + h (b_1 s_1 + b_2 s_2)          (3)

which is displayed in a Butcher tableau, after J. Butcher (Auckland):

    0    |  0      0
    c_2  |  a_21   0
    -----+--------------
         |  b_1    b_2
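Read as code, the tableau is just a recipe for the stages. A minimal MATLAB sketch of one such step (the function name rk2_step and its argument order are ours, for illustration):

    % Save as rk2_step.m. One step of the general explicit 2-stage method.
    % f is a function handle f(t,y); c2, a21, b1, b2 are tableau entries.
    function ynew = rk2_step(f, tn, yn, h, c2, a21, b1, b2)
        s1 = f(tn, yn);
        s2 = f(tn + c2*h, yn + h*a21*s1);
        ynew = yn + h*(b1*s1 + b2*s2);
    end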

Extend Taylor series
To find conditions for 2nd order consistency, match the local error from the Taylor series, starting from y(t_n) = y_n:

    y(t_{n+1}) = y(t_n) + h y'(t_n) + (1/2) h^2 y''(t_n) + O(h^3)
    y(t_{n+1}) = y_n + h f(t_n, y_n) + (1/2) h^2 [d f(t, y(t))/dt]|_{t_n} + O(h^3)
    y(t_{n+1}) = y_n + h f(t_n, y_n) + (1/2) h^2 [f_t + f_y y']|_{t_n} + O(h^3)
    y(t_{n+1}) = y_n + h f(t_n, y_n) + (1/2) h^2 [f_t + f_y f]|_{t_n} + O(h^3)

2-stage method
Now compare with our 2-stage method (expand about (t_n, y_n)):

    s_1 = f_n ≡ f(t_n, y_n)
    s_2 = f_n + f_t c_2 h + f_y h a_{21} s_1 + O(h^2)

so

    y_{n+1} = y_n + h (b_1 s_1 + b_2 s_2)
            = y_n + h b_1 f_n + h b_2 (f_n + f_t c_2 h + f_y h a_{21} s_1 + O(h^2))
            = y_n + h (b_1 + b_2) f_n + h^2 (b_2 c_2) f_t + h^2 (b_2 a_{21}) f_y f_n + O(h^3)

For this to match the Taylor series to O(h^3),

    y(t_{n+1}) = y_n + h f_n + (1/2) h^2 [f_t + f_y f]|_{t_n} + O(h^3)

we need to match terms, which gives ...

Order conditions
    b_1 + b_2 = 1
    b_2 c_2 = 1/2
    b_2 a_{21} = 1/2

We have 3 equations in 4 unknowns ⟹ a 1-parameter family of methods. Let c_2 = α:

    ⟹ a_{21} = α,   b_2 = 1/(2α),   b_1 = 1 − 1/(2α)

Any such method is consistent of order 2, by construction, but disagrees with the Taylor series at the next term, so is only 2nd order.

RK2
    s_1 = f(t_n, y_n)                                        (1)
    s_2 = f(t_n + αh, y_n + αh s_1)                          (2)
    y_{n+1} = y_n + h ((1 − 1/(2α)) s_1 + (1/(2α)) s_2)      (3)

Any such method is called a second order (explicit) Runge-Kutta method (RK2).

Example
α = 1/2 → explicit midpoint method
α = 1   → modified Euler or explicit trapezoid method
α = 2/3 → RK2 with lowest local error (Heun)
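As a sanity check on the order-2 claim, here is a short MATLAB experiment (assuming the rk2_step sketch above) integrating y' = −y, y(0) = 1 to t = 1 with modified Euler (α = 1, so a_21 = 1, b_1 = b_2 = 1/2). Halving h should cut the error by a factor of about 4:

    % Empirical order check for modified Euler on y' = -y, y(0) = 1.
    f = @(t,y) -y;
    for h = [0.1 0.05 0.025]
        y = 1;
        for n = 1:round(1/h)                       % integrate to t = 1
            y = rk2_step(f, (n-1)*h, y, h, 1, 1, 1/2, 1/2);
        end
        fprintf('h = %5.3f   error at t=1: %.2e\n', h, abs(y - exp(-1)));
    end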

Convergence of RK methods
By construction, our RK2 methods are consistent of order 2. Recall the Big Theorem:

    Consistency + 0-Stability → Convergence

Luckily, all RK methods are 0-stable, and so all RK methods are convergent; hence the RK2 methods are 2nd order convergent.
But what about numerical stability (behaviour at finite h)?

Linear stability

While convergence proofs are comforting, we actually run ODE codes at finite h. We want the numerical solution to have damped errors when the true solutions are contractive, i.e. J < 0. The simplest theory for this is linear stability.
Consider an autonomous linear system

    y_1' = A y_1 + b(t);   y_1(0) = y_0

Then a nearby solution with a different IC satisfies

    y_2' = A y_2 + b(t);   y_2(0) = y_0 + δ

The difference z = y_2 − y_1 satisfies

    z' = A z;   z(0) = δ

Model equation

Assume A is diagonalizable: then A = S Λ S^{-1}, with eigenvalues λ_i ∈ C. So by changing variables w = S^{-1} z we get the system

    w' = Λ w

which is decoupled, since Λ is diagonal:

    w_i' = λ_i w_i,   λ_i ∈ C

For linear stability, ask how the method behaves on the model equation, i.e. J = λ. This explains why, for linear stability, we use the model scalar equation

    y' = λ y,   λ ∈ C

Region of Absolute Stability

For contractive solutions, need Re(λ) < 0. For numerical errors to be damped → we demand |y_{n+1}| < |y_n|.
For Euler's Method:

    y_{n+1} = y_n + h f(t_n, y_n) = y_n + hλ y_n = (1 + hλ) y_n

We call {hλ ∈ C : |1 + hλ| < 1} the region of absolute stability (RAS); its intersection with the real axis = (−2, 0) is the interval of absolute stability.

RAS for RK2

Do the same for RK2: apply the method to the model equation:

    s_1 = f(t_n, y_n) = λ y_n
    s_2 = λ (y_n + αh s_1) = λ (y_n + αh λ y_n)
    y_{n+1} = y_n + h ((1 − 1/(2α)) s_1 + (1/(2α)) s_2)
            = y_n + h ((1 − 1/(2α)) λ y_n + (1/(2α)) λ (y_n + αh λ y_n))
            = y_n [1 + hλ + (1/2)(hλ)^2]

→ Region of absolute stability: |1 + hλ + (1/2)(hλ)^2| < 1
→ Interval of absolute stability: (−2, 0) (again)
Moral: use RK2 for better accuracy, not improved stability.
Note: since the exact solution satisfies

    y(t_{n+1}) = e^{hλ} y_n = [1 + hλ + (1/2)(hλ)^2 + O(h^3)] y_n

→ RK2 is only 2nd order, not higher.

A-stability

Ideally, we would like the method's numerical errors to be damped whenever solutions are contractive. If the region of absolute stability includes Re(hλ) < 0 (the whole left half of the complex plane) → the method is A-stable.

Theorem
No explicit RK method is A-stable.

Proof.
The region of absolute stability for any explicit RK method is given by |P(hλ)| < 1 where P is some polynomial. Since |P(hλ)| → ∞ as |hλ| → ∞, the RAS can never extend to infinity; it must always be a bounded domain.

RK3

Similarly, look for 3rd order methods using 3 stages:

    s_1 = f(t_n, y_n)                                          (4)
    s_2 = f(t_n + c_2 h, y_n + h a_{21} s_1)                   (5)
    s_3 = f(t_n + c_3 h, y_n + h a_{31} s_1 + h a_{32} s_2)    (6)
    y_{n+1} = y_n + h (b_1 s_1 + b_2 s_2 + b_3 s_3)            (7)

    0    |  0      0      0
    c_2  |  a_21   0      0
    c_3  |  a_31   a_32   0
    -----+---------------------
         |  b_1    b_2    b_3

Match with Taylor series → 3 one-parameter families, all RK3 with 3 stages.

RK4

Similarly, look for 4th order methods using 4 stages.

Example
Classical RK4 (Kutta 1905):

    0    |  0     0     0     0
    1/2  |  1/2   0     0     0
    1/2  |  0     1/2   0     0
    1    |  0     0     1     0
    -----+------------------------
         |  1/6   1/3   1/3   1/6

Can go on, but for p > 4 need s > p.

Example
For RK5, need 6 stages.
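The classical RK4 tableau above reads off directly as code; a minimal MATLAB sketch (the function name rk4_step is ours):

    % Save as rk4_step.m. One step of classical RK4 (Kutta 1905),
    % read off the tableau above: b = [1/6 1/3 1/3 1/6].
    function ynew = rk4_step(f, tn, yn, h)
        s1 = f(tn,       yn);
        s2 = f(tn + h/2, yn + h/2*s1);
        s3 = f(tn + h/2, yn + h/2*s2);
        s4 = f(tn + h,   yn + h*s3);
        ynew = yn + h*(s1 + 2*s2 + 2*s3 + s4)/6;
    end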

Effect of roundoff for RK methods

For each method, GE = O(h^p) after n = T/h steps: truncation error in exact arithmetic.
If roundoff errors add randomly (as observed), we get an extra error

    ∼ u n^{1/2} ∼ u h^{−1/2}

→ an optimal h, just like numerical differentiation, with

    h_opt ∼ u^{2/(2p+1)}

→ h_opt ∼ 10^{−5}, 10^{−4}, 10^{−3} for RK3, RK4, RK5 in double precision.
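A quick check of these magnitudes in MATLAB (taking unit roundoff u = eps/2 ≈ 1.1e−16 in double precision):

    % h_opt ~ u^(2/(2p+1)) for RK3, RK4, RK5
    u = eps/2;                   % unit roundoff in double precision
    p = 3:5;
    hopt = u.^(2 ./ (2*p + 1))   % ~3e-5, 3e-4, 1e-3: the orders quoted above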

[Figure: effect of roundoff for RK3 — global error at t = 1 against −log10(h)]

[Figure: effect of roundoff for RK4 — global error at t = 1 against −log10(h)]

[Figure: effect of roundoff for RK5 — global error at t = 1 against −log10(h)]

[Figure: RAS, regions growing from RK1 to RK4, plotted in the complex hλ-plane over roughly [−5, 1] × [−3, 3]]
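This figure can be reproduced by contouring the stability polynomials: for an s-stage method of order s (s = 1..4) the amplification factor is the truncated exponential P_s(z) = 1 + z + ... + z^s/s!. A MATLAB sketch:

    % RAS boundaries |P_s(z)| = 1 for RK1..RK4 in the complex h*lambda plane
    [x, yy] = meshgrid(-5:0.02:1, -3:0.02:3);
    z = x + 1i*yy;
    P = ones(size(z));
    figure, hold on
    for s = 1:4
        P = P + z.^s / factorial(s);         % next term of the truncated exp
        contour(x, yy, abs(P), [1 1], 'k');  % stability boundary |P| = 1
    end
    axis equal, grid on
    xlabel('Re(h\lambda)'), ylabel('Im(h\lambda)')
    title('RAS: regions growing from RK1 to RK4')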

Summary of RK methods
    Method        Order   Local error   Fevals/step   h_opt
    Euler = RK1   1       h^2           1             ∼ 10^{−10}
    RK2           2       h^3           2             ∼ 5 × 10^{−7}
    RK3           3       h^4           3             ∼ 10^{−5}
    RK4           4       h^5           4             ∼ 10^{−4}
    RK5           5       h^6           6             ∼ 10^{−3}

Variable step methods
So far, everything has been fixed-step: the user has to choose n or h.
Instead, the user should choose a tolerance (absolute or relative), and the method should choose h at each step to achieve an error smaller than the tolerance → a variable-step method.
The basic problem is that it's hard to estimate the global error (it depends on J, which may change over time), but we can estimate the local error. The idea is to step from t_n to t_{n+1} twice, using 2 methods with different h (e.g. h and h/2) or different order. Then, from the 2 results, estimate the local error and use this to control the stepsize, as sketched below.
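A minimal sketch of one adaptive step by step doubling, using the rk4_step sketch from earlier; the Richardson-style error estimate and the variable names are our illustration, not any particular library's:

    % One adaptive step (sketch; assumes f, t, y, h, Atol already defined).
    y1 = rk4_step(f, t, y, h);                    % one step of size h
    ym = rk4_step(f, t, y, h/2);                  % two steps of size h/2
    y2 = rk4_step(f, t + h/2, ym, h/2);
    err = abs(y2 - y1) / (2^4 - 1);               % Richardson estimate, p = 4
    if err < Atol
        t = t + h;  y = y2;                       % accept the better estimate
    end
    h = 0.8 * h * (Atol / err)^(1/5);             % 1/(p+1) = 1/5; 0.8 = safety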

Embedded Runge-Kutta methods
One clever idea (Fehlberg 1969) is to use 2 RK methods of different order, with the same c, A in the Butcher tableau (same evaluation points), so a lot of the function evaluations are shared between the 2 methods
→ saves work!

Example
MATLAB's ode23, ode45
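Typical usage of these solvers, with a test problem and tolerances chosen here purely for illustration:

    % Solve y' = -y + sin(t), y(0) = 1 on [0, 10], letting ode45 pick steps.
    f = @(t, y) -y + sin(t);
    opts = odeset('RelTol', 1e-6, 'AbsTol', 1e-8);
    [t, y] = ode45(f, [0 10], 1, opts);
    plot(t, y, '.-')   % marker spacing shows the steps the solver chose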
We use 2 estimates from methods of different order p, p+1, e.g. p = 2, 3:

    y^p_{n+1} has local error ∼ C h^{p+1}
    y^{p+1}_{n+1} has local error ∼ C̄ h^{p+2}

For usual values of h, C̄ h^{p+2} ≪ C h^{p+1}, so we estimate the error in y^p_{n+1} (the worse method) by

    err = |y^p_{n+1} − y^{p+1}_{n+1}|

and demand that err < Atol.

Local extrapolation

If err < Atol, keep that step, using y^{p+1}_{n+1} (the better estimate).
If not, cut down the stepsize h so that err < Atol with the new stepsize.
Controlling the stepsize using the error estimate of the worse method, but keeping the better estimate, is called local extrapolation.
We hope this local extrapolation makes up for controlling the local error, not the global error, but it's not guaranteed.

Rescaling h

We want err < Atol, and we know err ∼ h^{p+1} ⟹ we will achieve the desired tolerance with a new stepsize h_new = q h_old, provided

    err_new < Atol

But

    err_new / err_old ∼ C (qh)^{p+1} / (C h^{p+1}) = q^{p+1}

so we choose

    q = 0.8 (Atol / err)^{1/(p+1)}

where 0.8 is a safety factor to ensure the new h is easily small enough. A similar idea works for a relative tolerance.

ode23

For a simplified version, see ode23tx.m and Moler §7.5, 7.6.
Uses a 3rd order 3-stage RK3 and a (4-stage!) RK2 which reuses s_1, s_2, s_3 from the RK3 (no extra work) and adds

    s_4 = f(t_{n+1}, y_{n+1})

Note: s_4 ↦ s_1 on the next step (First Same As Last), so this costs nothing extra if the step is accepted (i.e. most of the time)
→ a 3rd order method + error estimator for ∼ 3 stages of work!
In fact, we don't bother forming y^p at all; we just form the local error estimator |y^3_{n+1} − y^2_{n+1}|.

ode45

Uses a 5th order 6-stage RK + a 4th order 7-stage First Same As Last RK
→ a 5th order method + error estimator for ∼ 6 stages of work!
Embedded RK methods are good nonstiff 1-step solvers, probably the first methods to try. MATLAB suggests ode45 as the first method to try.

End of Lecture 21