Numerical Methods & Scientific Computing: lecture notes
IVPs
Euler’s method
Truncation Errors in Euler’s Method
Really we want to understand the global error: the (absolute) error after k timesteps,

GE_k ≡ y(t_k) − y_k

Use a Taylor series on y(t_k + h), assuming y ∈ C² ⟹

y(t_{k+1}) = y(t_k) + h y'(t_k) + h² y''(ξ)/2   (1)
           = y(t_k) + h f(t_k, y(t_k)) + h² y''(ξ)/2   (2)

Euler's Method is just

y_{k+1} = y_k + h f(t_k, y_k)   (3)

(2) − (3) ⟹

GE_{k+1} = GE_k + h [f(t_k, y(t_k)) − f(t_k, y_k)] + h² y''(ξ)/2   (4)
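The recursion (3) is short enough to state directly in code. A minimal Python sketch (the model problem y' = y, y(0) = 1, and the step size are my choices for illustration, not from the notes):

```python
import math

def euler(f, t0, y0, h, n):
    """Take n steps of Euler's Method, eq. (3): y_{k+1} = y_k + h f(t_k, y_k)."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Illustrative model problem (my choice): y' = y, y(0) = 1, exact y(t) = e^t.
h, n = 0.01, 100                       # integrate to t = 1
y_num = euler(lambda t, y: y, 0.0, 1.0, h, n)
GE = math.e - y_num                    # global error GE_n = y(t_n) - y_n
```

The global error GE at t = 1 comes out near 0.0135, the quantity the derivation above now tracks step by step.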
The Mean Value Theorem in the second argument ⟹

f(t_k, y(t_k)) − f(t_k, y_k) = ∂f/∂y |_ξ (y(t_k) − y_k) ≡ J(ξ) GE_k   (5)

or we could use the Lipschitz bound

|f(t_k, y(t_k)) − f(t_k, y_k)| ≤ L |GE_k|   (6)

(4) and (5) ⟹

GE_{k+1} = (1 + hJ) GE_k + h² y''(ξ)/2   (7)

or, using (6),

|GE_{k+1}| ≤ (1 + hL) |GE_k| + h² |y''(ξ)|/2
Error propagation
In words, this difference equation says:

Global error at t_{k+1} = the global error at t_k, propagated with factor (1 + hJ),
plus the new error (the local error) made at each step.

(1 + hJ) determines the error propagation at finite h
→ the numerical stability of the method (cf. BadRecurrence.m).

We want numerical errors to damp out, if possible,
⟹ we want |1 + hJ| < 1

Note: since h > 0, errors must grow if J > 0.
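For the linear model problem y' = Jy (with J constant), recursion (7) is exact, so the factor (1 + hJ) can be watched directly. The values of J and h below are my illustrative choices:

```python
def error_after(J, h, e0, n):
    """Propagate an initial error e0 through n steps of the recursion
    e_{k+1} = (1 + h*J) e_k, i.e. e_n = (1 + h*J)**n * e0."""
    return (1.0 + h * J) ** n * e0

J = -10.0                                  # contractive problem (illustrative)
damped  = error_after(J, 0.05, 1.0, 10)    # |1 + hJ| = 0.5 < 1: error decays
growing = error_after(J, 0.25, 1.0, 10)    # |1 + hJ| = 1.5 > 1: error grows
```

Same ODE, same initial error: only the step size h decides whether the error shrinks (to about 10⁻³ here) or is amplified (to about 58).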
Diverging solutions
If J > 0 ⟹ nearby solutions are diverging.
Absolute errors will grow in a region where J > 0.
If J is not too large, the IVP may still be fairly well-conditioned.
If the solution is also growing, relative errors may be OK, even though absolute errors are growing
⟹ we may still get useful information from the numerical solution.
If J ≫ 0, the IVP is ill-conditioned
⟹ don't expect an accurate numerical solution, using any method!
More precisely: the IVP is ill-conditioned if J(t_f − t_0) ≫ 1.
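A quick numerical check of the diverging case, with the illustrative problem y' = y (so J = 1 > 0) and step/interval values of my choosing: the absolute error grows with t, but the relative error stays small, so the answer is still informative.

```python
import math

def euler_exp(h, t_end):
    """Euler for y' = y, y(0) = 1, so J = df/dy = 1 > 0 (diverging solutions)."""
    n = round(t_end / h)
    y = 1.0
    for _ in range(n):
        y += h * y
    return y

h, t_end = 0.01, 5.0
y_num   = euler_exp(h, t_end)
abs_err = math.exp(t_end) - y_num      # grows roughly like (t*h/2) * e^t
rel_err = abs_err / math.exp(t_end)    # stays small: useful information survives
```

Here the absolute error is a few units (against an exact value near 148), while the relative error is about 2.5%.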
Contractive solutions
If J < 0 ⟹ nearby solutions are contractive.
In this case, the local error made at each step is damped thereafter.
(Sketch of contracting solution curves omitted.)
We aim for accurate solutions in this case.
Convergence of Euler’s Method
Convergence is determined by what happens to the error as h → 0.
Numerical stability is determined by what happens to the error at finite h.

We would like to prove that GE_k → 0 as h → 0, i.e. that the forward error → 0 as h → 0.

We do it in 2 steps:
1 show the backward error (residual) → 0 as h → 0: consistency
2 show the condition number of the difference equation stays bounded as h → 0: 0-stability
then use a Big Theorem

that is true for ODE methods and for time-dependent linear PDE methods! For ODEs: any sensible method is consistent, but not all are 0-stable!

Consistency + Stability ⟹ Convergence
Consistency
A method is consistent if the exact solution satisfies the difference equation in the limit h → 0, n → ∞ with t = t_0 + nh fixed.

To check this, we find the residual R by substituting the exact solution into the method. For Euler's Method:

R = y(t_{n+1}) − y(t_n) − h f(t_n, y(t_n)) = (1/2) h² y''(ξ)

R is called the local truncation error (LTE).
For 1-step methods, the LTE is the same as the local error.
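The residual can be checked numerically by substituting a known exact solution into the method. Here y' = y with exact solution e^t (my choice of example): halving h should quarter R, consistent with R = h² y''(ξ)/2.

```python
import math

def residual(t, h):
    """R = y(t+h) - y(t) - h*f(t, y(t)) for y' = y with exact y(t) = e^t."""
    return math.exp(t + h) - math.exp(t) - h * math.exp(t)

R1 = residual(0.0, 0.10)
R2 = residual(0.0, 0.05)
ratio = R1 / R2               # close to 4, confirming R = O(h^2)
```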
Backward error
We want this residual to measure the backward error: is the exact solution a solution of the difference equation with a different f?

The answer is YES if we replace f with f + R/h,
so the backward error is really R/h.

Definition
A method is consistent if lim_{h→0} R/h = 0.

In our case, R/h ∼ h → 0 ⟹ Euler's Method is consistent; in fact Euler's Method is consistent of order 1.

Definition
A method is consistent of order p if R/h = O(h^p) as h → 0.
0-stability
Check that the difference equation doesn't become ill-conditioned as h → 0. Suppose we change the initial condition by δ_0 and, at step n, change the right-hand side (RHS) by δ_n:

u_{n+1} = u_n + h f(t_n, u_n);          u_0 = A
v_{n+1} = v_n + h [f(t_n, v_n) + δ_n];  v_0 = A + δ_0

Is the change in the solution, v − u, finite?

Can show (by induction) that

|v − u|_n ≤ e^{LT} |δ_0| + [(e^{LT} − 1)/L] δ

where T = t_n − t_0, assuming |δ_k| ≤ δ: no h dependence!

⟹ the change remains finite as h → 0, so Euler's Method is 0-stable.
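The induction bound can be sanity-checked numerically. A sketch with f(t, y) = −y (so L = 1), constant perturbations δ_0 = δ = 10⁻³, and several step sizes, all values my own illustrative choices:

```python
import math

def gap(h, delta0, delta, t_end=1.0):
    """Run Euler for y' = -y twice: unperturbed (u), and with the initial
    condition shifted by delta0 and the RHS shifted by delta (v)."""
    n = round(t_end / h)
    u, v = 1.0, 1.0 + delta0
    for _ in range(n):
        u = u + h * (-u)
        v = v + h * (-v + delta)
    return abs(v - u)

d0 = d = 1e-3
L, T = 1.0, 1.0                                           # f(t,y) = -y has L = 1
bound = math.exp(L * T) * d0 + (math.exp(L * T) - 1.0) / L * d
gaps = [gap(h, d0, d) for h in (0.1, 0.01, 0.001)]        # no blow-up as h -> 0
```

All three gaps sit below the bound (about 4.4 × 10⁻³), and shrinking h does not make them grow: exactly the "no h dependence" the bound promises.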
Hence convergence
Now combine these two results to show that |GE_n| → 0 as h, δ_0 → 0.

Definition
A method is convergent of order p if max_n |GE_n| = O(h^p) as h, δ_0 → 0.

Theorem
A method that is consistent of order p and 0-stable is convergent of order p.

Sketch of proof:
the backward error R/h plays the role of the perturbation δ. Therefore if the discrete condition number stays finite (0-stability) and R/h = O(h^p) as h → 0 (consistency), then as h, δ_0 → 0 we get |GE_n| = O(h^p) (convergence).
An error bound
For Euler's Method, R/h = (1/2) h y''(ξ)
⟹ |R/h| ≤ (1/2) h M, where |y''(ξ)| ≤ M, assuming y ∈ C².

So, for δ_0 → 0,

|GE_n| ≤ (hM / 2L) [e^{LT} − 1] = Kh

an a priori error bound! i.e. the global error is O(h).

Euler's Method is convergent of order 1 (a 'first order method').
Hence if we halve h, we expect GE to halve, as we observed.

The bound Kh depends sensitively on LT: this bound is rigorous but very pessimistic, since it doesn't distinguish between J ≫ 0 and J ≪ 0
(both have the same Lipschitz constant L = |J|).
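The O(h) prediction is easy to confirm: halving h should roughly halve the global error. Again using the illustrative problem y' = y, y(0) = 1 on [0, 1] (my choice, not prescribed by the notes):

```python
import math

def global_error(h):
    """Global error at t = 1 of Euler for y' = y, y(0) = 1 (exact answer e)."""
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * y
    return math.e - y

ratio = global_error(0.02) / global_error(0.01)   # close to 2, so GE = O(h)
```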
A tighter bound
We can do better if J < 0 (solutions are contractive):

|GE_n| ≤ (hM/2) T = Kh

provided h satisfies

|1 + hJ| < 1

i.e. numerical errors are damped.

BUT if J < 0 and |1 + hJ| > 1,
we get numerical error growth for a well-conditioned IVP: a classic case of numerical instability.
Interval of absolute stability
The interval of h-values where Euler's Method is numerically stable is called the
interval of absolute stability:

|1 + hJ| < 1 ⟹ −2 < hJ < 0, i.e. 0 < h < 2/|J| (for J < 0).
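The endpoints of this interval are easy to probe numerically. For y' = Jy with J = −100 (an illustrative value of my choosing, giving 2/|J| = 0.02), stepping just inside and just outside the interval:

```python
def final_size(J, h, n=200):
    """|y_n| after n Euler steps for y' = J*y, y(0) = 1: y_n = (1 + h*J)**n."""
    y = 1.0
    for _ in range(n):
        y += h * J * y
    return abs(y)

J = -100.0
stable   = final_size(J, 0.015)   # h < 2/|J|: |1 + hJ| = 0.5, decays toward 0
unstable = final_size(J, 0.025)   # h > 2/|J|: |1 + hJ| = 1.5, blows up
```

A well-conditioned, rapidly contracting IVP, yet the slightly-too-large step size produces an astronomically large answer: the numerical instability described above.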