Numerical Methods & Scientific Computing: lecture notes
Week 10: aim to cover

Euler's method: derivation
Euler's method: performance and error analysis
Runge-Kutta methods
IVPs
Solving differential equations

One of the most useful numerical techniques: numerical solution of differential equations. We do only Ordinary Differential Equations (ODEs)
⇒ unknown functions yi(t) depend on only 1 independent variable t.
Recall that any nth order ODE can be written as a system of 1st order ODEs
⇒ any system of ODEs can be written as a system of 1st order ODEs:

dyi/dt = fi(t, y1, ..., yn),   i = 1..n

or in vector form

dy/dt = f(t, y)
Example:
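The example content on this slide did not survive extraction. As an illustrative sketch (the specific ODE and function names are mine, not from the notes), here is the reduction just described, written in Python rather than MATLAB: the 2nd-order ODE y″ + y = 0 becomes a 1st-order system by setting y1 = y, y2 = y′.

```python
# Sketch: rewrite the 2nd-order ODE y'' + y = 0 as a 1st-order system
# by setting y1 = y, y2 = y', which gives y1' = y2, y2' = -y1.

def f(t, y):
    """Right-hand side f(t, y) of the first-order system dy/dt = f(t, y)."""
    y1, y2 = y
    return [y2, -y1]

# At t = 0 with y(0) = 1, y'(0) = 0, the slope vector is [0, -1].
print(f(0.0, [1.0, 0.0]))  # → [0.0, -1.0]
```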
Special cases
If f = f(y) only, the system is autonomous (else nonautonomous)
If f = A(t)y + b(t), the system is linear
If b = 0, the linear system is homogeneous
If A is constant, the linear system is a constant-coefficient system
If there is only 1 ODE, the equation is scalar
Initial Value Problems

If the initial conditions

yi(t0) = αi

are given at a single value of t (e.g. t0 = 0), we have an Initial Value Problem. Otherwise, we have a Boundary Value Problem.
We only cover Initial Value Problems (IVPs).
The methods used for a system are basically the same as those for a single 1st order ODE
⇒ we lose little by discussing a scalar ODE

dy/dt = f(t, y)  where y(t0) = y0
Finding a numerical solution means:
find a set of values {yk} at some set of output points {tk}.
We need to distinguish between the true value y(tk) and the numerical approximation yk.
We need numerical methods since most ODEs can't be solved analytically.

Example
y′ = y³ + t² can't be solved in closed form, even by Maple.
Local existence and uniqueness

Theorem
If f(t, y) is continuous in t ∈ [t0, tf] and y ∈ ℝ, and satisfies a Lipschitz condition

|f(t, y) − f(t, ŷ)| ≤ L |y − ŷ|

⇒ the IVP has a unique solution for t ∈ [t0, t1], t1 ≤ tf.

If ∂f/∂y is continuous ⇒ f satisfies a Lipschitz condition, with L = sup |∂f/∂y|.
The Lipschitz property is stronger than continuity of f(t, y) but weaker than continuity of ∂f/∂y.
This theorem guarantees a unique solution, at least for a while after t0.
We will assume f satisfies a Lipschitz condition.
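As a concrete illustration (this example is mine, not from the notes): for f(t, y) = sin(y) we have |∂f/∂y| = |cos(y)| ≤ 1, so L = 1 is a Lipschitz constant. A quick numerical spot check in Python:

```python
import math

# For f(t, y) = sin(y), |df/dy| = |cos(y)| <= 1, so L = 1 is a Lipschitz constant.
f = lambda t, y: math.sin(y)
L = 1.0

# Spot-check |f(t, y) - f(t, yhat)| <= L * |y - yhat| on a few pairs (y, yhat).
pairs = [(0.0, 1.0), (-2.0, 3.5), (10.0, 10.001)]
ok = all(abs(f(0.0, y) - f(0.0, yh)) <= L * abs(y - yh) for y, yh in pairs)
print(ok)
```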
Sensitivity of IVP
Suppose we change the IC from y0 to y0 + ε. What is the change in the solution, measured with the function ∞-norm

‖yε(t) − y(t)‖∞ ≡ max |yε(t) − y(t)| ?

Under the same assumptions as above, one can show that

‖yε(t) − y(t)‖∞ ≤ cε

where c is independent of ε.
If c is not too large, the IVP is well-conditioned; if c ≫ 1 it is ill-conditioned.
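For the scalar model problem y′ = λy (my choice of example, not from the notes) the constant can be computed exactly: yε(t) − y(t) = ε e^{λ(t−t0)}, so on [0, T] we get c = e^{λT}.

```python
import math

# For y' = lam*y on [0, T]: y(t) = y0*exp(lam*t), so perturbing y0 by eps
# gives |y_eps(t) - y(t)| = eps*exp(lam*t), and the worst case is c = exp(lam*T).
def condition_constant(lam, T):
    return math.exp(lam * T)

print(condition_constant(-1.0, 10.0))  # decaying solutions: c < 1, well-conditioned
print(condition_constant(3.0, 10.0))   # growing solutions: c huge, ill-conditioned
```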
Roughly, when nearby solutions are diverging rapidly from each other (∂f/∂y ≫ 1), the IVP is ill-conditioned: no numerical method will give accurate answers.
When nearby solutions approach each other (∂f/∂y < 0), the IVP is well-conditioned: we aim for accurate answers in this case.

Example
For an autonomous linear system, solutions approach each other if Re(λmax(A)) < 0, i.e. all eigenvalues of A lie in the left half of the complex plane.

Warning: this doesn't generalize immediately to nonautonomous or nonlinear cases!
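For a 2×2 constant matrix the eigenvalue criterion is easy to check via the trace and determinant. A sketch (the matrix is my own illustrative choice):

```python
import cmath

# Eigenvalues of a 2x2 matrix [[a, b], [c, d]] from the characteristic
# polynomial lam^2 - (a+d)*lam + (a*d - b*c) = 0.
def eig2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A = [[-1, 2], [0, -3]]: eigenvalues -1 and -3, both in the left half-plane,
# so solutions of y' = A*y approach each other.
l1, l2 = eig2(-1.0, 2.0, 0.0, -3.0)
print(l1.real, l2.real)
```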
Time-stepping
All methods start at t = t0 using the Initial Condition y = y0, then march a distance h in t:

t0 → t0 + h ≡ t1
y0 → y1 ≈ y(t1)

At t = t1 we have another IVP,

dy/dt = f(t, y),   y(t1) = y1

so just repeat this procedure until t = tf. This is called time-stepping.
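The marching procedure is method-agnostic: any one-step update rule can be plugged into the same loop. A minimal fixed-step driver in Python (names and test problem are my own sketch):

```python
# Generic fixed-step time-stepping: repeatedly advance (t, y) by h.
# `step(f, t, y, h)` is any one-step update rule (Euler, Runge-Kutta, ...).
def march(step, f, t0, y0, h, n_steps):
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = step(f, t, y, h)
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

# Trivial usage with the "step with current slope" rule on y' = 1,
# whose exact solution from y(0) = 0 is y = t.
ts, ys = march(lambda f, t, y, h: y + h * f(t, y),
               lambda t, y: 1.0, 0.0, 0.0, 0.1, 10)
print(ts[-1], ys[-1])
```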
Types of time-stepping
If h is constant → fixed step method
If h is changed from step to step → variable step method
If, to get from tn to tn+1, we use only yn → 1-step method
If, to get from tn to tn+1, we use previous values yn−1, yn−2, ... → multistep method

We start with fixed step methods, but modern codes are usually variable step. What method to choose depends on:

discretization error
stability properties of the method
efficiency, e.g. number of function evaluations
ease of use
In MATLAB
it's very easy to solve an IVP:

1 define the ODE y′ = f(t, y)
e.g. myde = @(t,y) y.^2 + t.^3
(the arguments must be in order (t,y)!!)
2 define the Initial Condition
e.g. y0 = 0.5
3 then solve with one of MATLAB's solvers
e.g. [t,y] = ode45(myde,[0,1],y0); MAGIC
4 plot the result
plot(t,y)
We now explore how this magic is performed ...
Euler’s method
The simplest method: Euler’s Method
Start with y0. We know y′(t0) = f(t0, y0), so step a distance h with that slope:

y1 = y0 + h f(t0, y0)

Now repeat, using in general

Euler's Method
yn+1 = yn + h f(tn, yn)

We are approximating the ODE by solving a difference equation → discretization error is produced.
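The update yn+1 = yn + h f(tn, yn) transcribes directly into a loop. A sketch in Python (the test problem y′ = y is my own illustrative choice):

```python
import math

# Euler's method: y_{n+1} = y_n + h*f(t_n, y_n), taking n_steps steps from t0.
def euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# Test problem y' = y, y(0) = 1: exact solution e^t, so y(1) = e ~ 2.71828.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
print(approx, math.e - approx)
```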
Derivation using Taylor series
Taylor series → local error ∼ h²
This idea of matching with Taylor series leads to Runge-Kutta methods (1-step methods).
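The matching argument, filled in for completeness (standard Taylor expansion about tn):

```latex
y(t_{n+1}) = y(t_n) + h\,y'(t_n) + \tfrac{h^2}{2}\,y''(\xi_n)
           = y(t_n) + h\,f(t_n, y(t_n)) + \tfrac{h^2}{2}\,y''(\xi_n)
```

Euler's step yn+1 = yn + h f(tn, yn) matches the first two terms, so the neglected term, the local error, is (h²/2) y″(ξn) ∼ h².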
Derivation by approximating y′

Use a Forward Difference to approximate y′
→ local error ∼ h² (as before)
This idea of approximating y′ leads to BDF methods (multistep methods).
Derivation by quadrature

y(tn+1) − y(tn) = ∫ from tn to tn+1 of f(τ, y(τ)) dτ

Use the Left Hand rectangle rule → local error ∼ h²
Approximating this integral leads to Adams methods (multistep methods).
The local error is a truncation error or discretization error.
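Filling in the quadrature step: the Left Hand rectangle rule replaces the integral by h times the left-endpoint value,

```latex
\int_{t_n}^{t_{n+1}} f(\tau, y(\tau))\,d\tau \approx h\, f(t_n, y(t_n)),
\qquad \text{error} \sim h^2,
```

which again gives yn+1 = yn + h f(tn, yn) with local error ∼ h².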
How does it perform?
• Demo
Here we know the exact answers, so we plot the global error as a function of time (on a semilog plot) for three different problems, for 3 choices of h.
Some observations from the results...
1 sometimes the error grows with t, sometimes not
2 the global error y(tk) − yk appears to be ∝ h (if the method works at all)
3 sometimes it blows up, if the stepsize is too large
To understand these observations, we need some error analysis.
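Observation 2 can be checked numerically: if the global error is ∝ h, halving h should roughly halve the error at a fixed time. A sketch using Euler on y′ = −y (problem and names are my own choice):

```python
import math

# Euler's method on y' = -y, y(0) = 1, integrated to t = 1;
# returns the global error |y_N - y(1)| where y(1) = exp(-1).
def euler_error(h):
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y = y + h * (-y)
    return abs(y - math.exp(-1.0))

# Halving h should roughly halve the global error (error ~ C*h).
e1, e2 = euler_error(0.01), euler_error(0.005)
print(e1 / e2)  # close to 2
```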
End of Lecture 19