SEC. 6.1 APPROXIMATING THE DERIVATIVE
Central-Difference Formulas
If the function f (x) can be evaluated at values that lie to the left and right of x, then the best two-point formula will involve abscissas that are chosen symmetrically on both sides of x.
Theorem 6.1 (Centered Formula of Order O(h^2)). Assume that f ∈ C^3[a, b] and that x − h, x, x + h ∈ [a, b]. Then

(3)    f′(x) ≈ (f(x + h) − f(x − h))/(2h).

Furthermore, there exists a number c = c(x) ∈ [a, b] such that

(4)    f′(x) = (f(x + h) − f(x − h))/(2h) + E_trunc(f, h),

where

E_trunc(f, h) = −h^2 f^(3)(c)/6 = O(h^2).

The term E_trunc(f, h) is called the truncation error.
Proof. Start with the second-degree Taylor expansions f(x) = P2(x) + E2(x), about x, for f(x + h) and f(x − h):

(5)    f(x + h) = f(x) + f′(x)h + f^(2)(x)h^2/2! + f^(3)(c1)h^3/3!

and

(6)    f(x − h) = f(x) − f′(x)h + f^(2)(x)h^2/2! − f^(3)(c2)h^3/3!.

After (6) is subtracted from (5), the result is

(7)    f(x + h) − f(x − h) = 2f′(x)h + (f^(3)(c1) + f^(3)(c2))h^3/3!.
Since f^(3)(x) is continuous, the intermediate value theorem can be used to find a value c so that

(8)    (f^(3)(c1) + f^(3)(c2))/2 = f^(3)(c).

This can be substituted into (7) and the terms rearranged to yield

(9)    f′(x) = (f(x + h) − f(x − h))/(2h) − f^(3)(c)h^2/3!.
The first term on the right side of (9) is the central-difference formula (3), the second term is the truncation error, and the proof is complete. •
Suppose that the value of the third derivative f^(3)(c) does not change too rapidly; then the truncation error in (4) goes to zero in the same manner as h^2, which is expressed by using the notation O(h^2). When computer calculations are used, it is not desirable to choose h too small. For this reason it is useful to have a formula for approximating f′(x) that has a truncation error term of the order O(h^4).
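Formula (3) translates directly into a few lines of MATLAB. The following is a minimal sketch; the function handle f, the point x = 0.8, and the step h = 0.01 are illustrative choices that anticipate Example 6.2 below.

```matlab
% Minimal sketch of the centered formula (3).
f = @(x) cos(x);
x = 0.8;
h = 0.01;
dapprox = (f(x + h) - f(x - h)) / (2*h);   % formula (3)
err = -sin(x) - dapprox;                   % exact derivative of cos is -sin
fprintf('approximation = %.9f   error = %.9f\n', dapprox, err);
```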
Theorem 6.2 (Centered Formula of Order O(h^4)). Assume that f ∈ C^5[a, b] and that x − 2h, x − h, x, x + h, x + 2h ∈ [a, b]. Then

(10)    f′(x) ≈ (−f(x + 2h) + 8f(x + h) − 8f(x − h) + f(x − 2h))/(12h).

Furthermore, there exists a number c = c(x) ∈ [a, b] such that

(11)    f′(x) = (−f(x + 2h) + 8f(x + h) − 8f(x − h) + f(x − 2h))/(12h) + E_trunc(f, h),

where

E_trunc(f, h) = h^4 f^(5)(c)/30 = O(h^4).
Proof. One way to derive formula (10) is as follows. Start with the difference between the fourth-degree Taylor expansions f(x) = P4(x) + E4(x), about x, of f(x + h) and f(x − h):

(12)    f(x + h) − f(x − h) = 2f′(x)h + 2f^(3)(x)h^3/3! + 2f^(5)(c1)h^5/5!.

Then use the step size 2h, instead of h, and write down the following approximation:

(13)    f(x + 2h) − f(x − 2h) = 4f′(x)h + 16f^(3)(x)h^3/3! + 64f^(5)(c2)h^5/5!.
Next multiply the terms in equation (12) by 8 and subtract (13) from it. The terms involving f^(3)(x) will be eliminated and we get

(14)    −f(x + 2h) + 8f(x + h) − 8f(x − h) + f(x − 2h) = 12f′(x)h + (16f^(5)(c1) − 64f^(5)(c2))h^5/120.
If f^(5)(x) has one sign and if its magnitude does not change rapidly, we can find a value c that lies in [x − 2h, x + 2h] so that

(15)    16f^(5)(c1) − 64f^(5)(c2) = −48f^(5)(c).

After (15) is substituted into (14) and the result is solved for f′(x), we obtain

(16)    f′(x) = (−f(x + 2h) + 8f(x + h) − 8f(x − h) + f(x − 2h))/(12h) + f^(5)(c)h^4/30.
The first term on the right side of (16) is the central-difference formula (10) and the second term is the truncation error; the theorem is proved. •
Suppose that |f^(5)(c)| is bounded for c ∈ [a, b]; then the truncation error in (11) goes to zero in the same manner as h^4, which is expressed with the notation O(h^4). Now we can make a comparison of the two formulas (3) and (10). Suppose that f(x) has five continuous derivatives and that |f^(3)(c)| and |f^(5)(c)| are about the same. Then the truncation error for the fourth-order formula (10) is O(h^4) and will go to zero faster than the truncation error O(h^2) for the second-order formula (3). This permits the use of a larger step size.
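A quick numerical check of these rates can be made by halving h and watching the error of (3) shrink by a factor of about 4 and the error of (10) by a factor of about 16. The sketch below is illustrative only and is not an example from the text; the test function f = exp and the point x = 1 are arbitrary choices.

```matlab
% Illustrative order check: halving h should cut the error of (3) by
% roughly 4 and the error of (10) by roughly 16.
f = @(x) exp(x);  x = 1;  exact = exp(1);
for h = [0.2 0.1 0.05]
    e2 = (f(x+h) - f(x-h))/(2*h) - exact;                              % formula (3)
    e4 = (-f(x+2*h) + 8*f(x+h) - 8*f(x-h) + f(x-2*h))/(12*h) - exact;  % formula (10)
    fprintf('h = %.2f   error of (3) = %10.3e   error of (10) = %10.3e\n', h, e2, e4);
end
```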
Example 6.2. Let f (x) = cos(x).
(a) Use formulas (3) and (10) with step sizes h = 0.1, 0.01, 0.001, and 0.0001, and calculate approximations for f′(0.8). Carry nine decimal places in all the calculations.
(b) Compare with the true value f′(0.8) = −sin(0.8).
(a) Using formula (3) with h = 0.01, we get
f′(0.8) ≈ (f(0.81) − f(0.79))/0.02 ≈ (0.689498433 − 0.703845316)/0.02 ≈ −0.717344150.
Using formula (10) with h = 0.01, we get
f′(0.8) ≈ (−f(0.82) + 8f(0.81) − 8f(0.79) + f(0.78))/0.12
       ≈ (−0.682221207 + 8(0.689498433) − 8(0.703845316) + 0.710913538)/0.12
       ≈ −0.717356108.

(b) The error in approximation for formulas (3) and (10) turns out to be −0.000011941 and 0.000000017, respectively. In this example, formula (10) gives a better approximation to f′(0.8) than formula (3) when h = 0.01. The error analysis will illuminate this example and show why this happened. The other calculations are summarized in Table 6.2.
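The calculations of Example 6.2 can be reproduced with the short script below. Rounding each cosine value to nine decimal places imitates the nine-decimal arithmetic carried in the text; the rounding helper f9 is an illustrative device, not part of the text, and the last digit of the output may occasionally differ from the tabulated values.

```matlab
% Sketch reproducing Example 6.2: formulas (3) and (10) for f'(0.8), f = cos,
% with function values rounded to nine decimal places.
f9 = @(x) round(cos(x)*1e9)/1e9;
x = 0.8;  exact = -sin(0.8);
for h = [0.1 0.01 0.001 0.0001]
    d2 = (f9(x+h) - f9(x-h)) / (2*h);                                % formula (3)
    d4 = (-f9(x+2*h) + 8*f9(x+h) - 8*f9(x-h) + f9(x-2*h)) / (12*h);  % formula (10)
    fprintf('h = %.4f   %12.9f (err %12.9f)   %12.9f (err %12.9f)\n', ...
            h, d2, exact - d2, d4, exact - d4);
end
```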
6.2 Numerical Differentiation Formulas
More Central-Difference Formulas
The formulas for f′(x0) in the preceding section required that the function can be evaluated at abscissas that lie on both sides of x, and they were referred to as central-difference formulas. Taylor series can be used to obtain central-difference formulas for the higher derivatives. The popular choices are those of order O(h^2) and O(h^4) and are given in Tables 6.3 and 6.4. In these tables we use the convention that fk = f(x0 + kh) for k = −3, −2, −1, 0, 1, 2, 3.
Table 6.3  Central-Difference Formulas of Order O(h^2)

f′(x0) ≈ (f1 − f−1)/(2h)
f″(x0) ≈ (f1 − 2f0 + f−1)/h^2
f^(3)(x0) ≈ (f2 − 2f1 + 2f−1 − f−2)/(2h^3)
f^(4)(x0) ≈ (f2 − 4f1 + 6f0 − 4f−1 + f−2)/h^4

Table 6.4  Central-Difference Formulas of Order O(h^4)

f′(x0) ≈ (−f2 + 8f1 − 8f−1 + f−2)/(12h)
f″(x0) ≈ (−f2 + 16f1 − 30f0 + 16f−1 − f−2)/(12h^2)
f^(3)(x0) ≈ (−f3 + 8f2 − 13f1 + 13f−1 − 8f−2 + f−3)/(8h^3)
f^(4)(x0) ≈ (−f3 + 12f2 − 39f1 + 56f0 − 39f−1 + 12f−2 − f−3)/(6h^4)

For illustration, we will derive the formula for f″(x) of order O(h^2) in Table 6.3. Start with the Taylor expansions

(1)    f(x + h) = f(x) + hf′(x) + h^2 f″(x)/2 + h^3 f^(3)(x)/6 + h^4 f^(4)(x)/24 + ···
and

(2)    f(x − h) = f(x) − hf′(x) + h^2 f″(x)/2 − h^3 f^(3)(x)/6 + h^4 f^(4)(x)/24 − ···.
Adding equations (1) and (2) will eliminate the terms involving the odd derivatives f′(x), f^(3)(x), f^(5)(x), …:

(3)    f(x + h) + f(x − h) = 2f(x) + 2h^2 f″(x)/2 + 2h^4 f^(4)(x)/24 + ···.

Solving equation (3) for f″(x) yields

(4)    f″(x) = (f(x + h) − 2f(x) + f(x − h))/h^2 − 2h^2 f^(4)(x)/4! − 2h^4 f^(6)(x)/6! − ··· − 2h^(2k−2) f^(2k)(x)/(2k)! − ···.
If the series in (4) is truncated at the fourth derivative, there exists a value c that lies in [x − h, x + h], so that
(5)    f″(x0) = (f1 − 2f0 + f−1)/h^2 − h^2 f^(4)(c)/12.

This gives us the desired formula for approximating f″(x):

(6)    f″(x0) ≈ (f1 − 2f0 + f−1)/h^2.
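The formulas in Tables 6.3 and 6.4 translate directly into code. The sketch below writes the four order O(h^2) formulas of Table 6.3, including formula (6), as anonymous functions; the handle names d1, ..., d4 are illustrative choices, not from the text.

```matlab
% Order O(h^2) central-difference formulas of Table 6.3 as anonymous
% functions of a function handle f, a point x0, and a step size h.
d1 = @(f,x0,h) (f(x0+h) - f(x0-h)) / (2*h);
d2 = @(f,x0,h) (f(x0+h) - 2*f(x0) + f(x0-h)) / h^2;                        % formula (6)
d3 = @(f,x0,h) (f(x0+2*h) - 2*f(x0+h) + 2*f(x0-h) - f(x0-2*h)) / (2*h^3);
d4 = @(f,x0,h) (f(x0+2*h) - 4*f(x0+h) + 6*f(x0) - 4*f(x0-h) + f(x0-2*h)) / h^4;
% For instance, d3(@cos, 0.8, 0.1) approximates the third derivative of
% cos(x) at x = 0.8, whose true value is sin(0.8) = 0.717356...
d3(@cos, 0.8, 0.1)
```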
Example 6.4. Let f(x) = cos(x).
(a) Use formula (6) with h = 0.1, 0.01, and 0.001 and find approximations to f″(0.8). Carry nine decimal places in all calculations.
(b) Compare with the true value f″(0.8) = −cos(0.8).

(a) The calculation for h = 0.01 is
f″(0.8) ≈ (f(0.81) − 2f(0.80) + f(0.79))/0.0001
       ≈ (0.689498433 − 2(0.696706709) + 0.703845316)/0.0001
       ≈ −0.696690000.
(b) The error in this approximation is −0.000016709. The other calculations are summarized in Table 6.5. The error analysis will illuminate this example and show why h = 0.01 was best.
Table 6.5  Numerical Approximations to f″(x) for Example 6.4

Step size    Approximation by formula (6)    Error using formula (6)
h = 0.1      −0.696126300                    −0.000580409
h = 0.01     −0.696690000                    −0.000016709
h = 0.001    −0.696000000                    −0.000706709
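The entries of Table 6.5 depend on the nine-decimal arithmetic specified in Example 6.4; in full double precision the round-off contribution is far smaller and the errors at small h would look different. The sketch below imitates the nine-decimal arithmetic by rounding each cosine value, with f9 again an illustrative helper.

```matlab
% Sketch reproducing Example 6.4 / Table 6.5 with formula (6), using
% function values rounded to nine decimal places.
f9 = @(x) round(cos(x)*1e9)/1e9;
x = 0.8;  exact = -cos(0.8);
for h = [0.1 0.01 0.001]
    approx = (f9(x+h) - 2*f9(x) + f9(x-h)) / h^2;    % formula (6)
    fprintf('h = %.3f   approx = %12.9f   error = %12.9f\n', h, approx, exact - approx);
end
```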
Error Analysis
Let fk = yk + ek, where ek is the error in computing f(xk), including noise in measurement and round-off error. Then formula (6) can be written

(7)    f″(x0) = (y1 − 2y0 + y−1)/h^2 + E(f, h).

The error term E(f, h) for the numerical derivative (7) will have a part due to round-off error and a part due to truncation error:

(8)    E(f, h) = (e1 − 2e0 + e−1)/h^2 − h^2 f^(4)(c)/12.
If it is assumed that each error ek is of the magnitude ε, with signs that accumulate errors, and that |f^(4)(x)| ≤ M, then we get the following error bound:

(9)    |E(f, h)| ≤ 4ε/h^2 + Mh^2/12.

If h is small, then the contribution 4ε/h^2 due to round-off error is large. When h is large, the contribution Mh^2/12 is large. The optimal step size will minimize the quantity

(10)    g(h) = 4ε/h^2 + Mh^2/12.

Setting g′(h) = 0 results in −8ε/h^3 + Mh/6 = 0, which yields the equation h^4 = 48ε/M, from which we obtain the optimal value:

(11)    h = (48ε/M)^(1/4).

When formula (11) is applied to Example 6.4, use the bound |f^(4)(x)| ≤ |cos(x)| ≤ 1 = M and the value ε = 0.5 × 10^−9. The optimal step size is h = (24 × 10^−9/1)^(1/4) = 0.01244666, and we see that h = 0.01 was closest to the optimal value.
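The optimal step size (11) is easy to evaluate; the following lines use the assumptions of the text, a round-off magnitude eps0 = 0.5 × 10^−9 for nine-decimal arithmetic and the bound M = 1. Note that at the minimizing h the two terms of (9) are equal.

```matlab
% Optimal step size (11) for formula (6) in the setting of Example 6.4.
eps0 = 0.5e-9;  M = 1;
h_opt = (48*eps0/M)^(1/4)                 % about 0.0124, so h = 0.01 is closest
bound = 4*eps0/h_opt^2 + M*h_opt^2/12     % error bound (9) at the optimal step
```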
Since the portion of the error due to round-off is inversely proportional to the square of h, this term grows when h gets small. This is sometimes referred to as the step-size dilemma. One partial solution to this problem is to use a formula of higher order so that a larger value of h will produce the desired accuracy. The formula for f″(x0) of order O(h^4) in Table 6.4 is

(12)    f″(x0) = (−f2 + 16f1 − 30f0 + 16f−1 − f−2)/(12h^2) + E(f, h).
The error term for (12) has the form
(13)    E(f, h) = 16ε/(3h^2) + h^4 f^(6)(c)/90,

where c lies in the interval [x − 2h, x + 2h]. A bound for |E(f, h)| is

(14)    |E(f, h)| ≤ 16ε/(3h^2) + h^4 M/90,

where |f^(6)(x)| ≤ M. The optimal value for h is given by the formula

(15)    h = (240ε/M)^(1/6).

Example 6.5. Let f(x) = cos(x).
(a) Use formula (12) with h = 1.0, 0.1, and 0.01 and find approximations to f ′′(0.8). Carry nine decimal places in all the calculations.
(b) Compare with the true value f ′′(0.8) = − cos(0.8).
(c) Determine the optimal step size.
(a) The calculation for h = 0.1 is
f″(0.8) ≈ (−f(1.0) + 16f(0.9) − 30f(0.8) + 16f(0.7) − f(0.6))/0.12
       ≈ (−0.540302306 + 9.945759488 − 20.90120127 + 12.23747499 − 0.825335615)/0.12
       ≈ −0.696705958.
(b) The error in this approximation is −0.000000751. The other calculations are summarized in Table 6.6.
(c) When formula (15) is applied, we can use the bound |f^(6)(x)| ≤ |cos(x)| ≤ 1 = M and the value ε = 0.5 × 10^−9. These values give the optimal step size h = (120 × 10^−9/1)^(1/6) = 0.070231219.
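Example 6.5 can be checked with the same device of rounding each cosine value to nine decimal places (the illustrative helper f9 below); the last digit or two of the output may differ slightly from the values quoted in the text. The optimal step size (15) is evaluated under the same assumptions eps0 = 0.5 × 10^−9 and M = 1.

```matlab
% Sketch reproducing Example 6.5: formula (12) for f''(0.8) with f = cos,
% using function values rounded to nine decimal places.
f9 = @(x) round(cos(x)*1e9)/1e9;
x = 0.8;  exact = -cos(0.8);
for h = [1.0 0.1 0.01]
    approx = (-f9(x+2*h) + 16*f9(x+h) - 30*f9(x) + 16*f9(x-h) - f9(x-2*h)) / (12*h^2);
    fprintf('h = %.2f   approx = %12.9f   error = %12.9f\n', h, approx, exact - approx);
end
h_opt = (240*0.5e-9/1)^(1/6)   % about 0.0702, matching the value in the text
```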