
Chapter 11
Polynomial Interpolation
Goals of this chapter
• Introduce three methods for computing polynomial interpolants.
• Discuss the error in interpolation.
• Describe more advanced interpolation techniques based on a special choice of points and on using derivative values.
Polynomial interpolants are rarely the end product of a numerical process. Their importance is more as building blocks for other, more complex algorithms in differentiation, integration, solution of differential equations, approximation theory at large, and other areas. Hence, polynomial interpolation arises frequently; indeed, it is one of the most ubiquitous tasks, both within the design of numerical algorithms and in their analysis. Its importance and centrality help explain the considerable length of the present chapter.
This chapter starts in Section 11.1 with a general description of approximation processes in one independent variable, arriving at polynomial interpolation as one such fundamental family of techniques. In Sections 11.2, 11.3 and 11.4 we shall see no less than three different forms (different bases) of interpolating polynomials. They are all of fundamental importance and are used extensively in the practical construction of numerical algorithms.
Estimates and bounds for the error in polynomial interpolation are derived in Section 11.5. If the choice of locations for the interpolation data is up to the user then a special set of abscissae (nodes) called Chebyshev points is an advantageous choice, and this is discussed in Section 11.6. Finally, Section 11.7 considers the case where not only function values but also derivative values are available for interpolation.
11.1 General approximation and interpolation
Interpolation is a special case of approximation. In this section we consider different settings in which approximation problems arise, explain the need for finding approximating functions, describe a general form for interpolants and important special cases, and end up with polynomial interpolation.
Figure 11.1. Different interpolating curves through the same set of points: (a) reasonable, (b) unreasonable.
Discrete and continuous approximation in 1D
It is possible to distinguish between approximation techniques for two types of problems:

1. Data fitting (Discrete approximation problem):
Given a set of data points $\{(x_i, y_i)\}_{i=0}^{n}$, find a reasonable function v(x) that fits the data points.
If the data are accurate it might make sense to require that v(x) interpolate the data, i.e., that the curve pass through the data exactly, satisfying

$$v(x_i) = y_i, \quad i = 0, 1, \ldots, n.$$
See Figure 11.1.
2. Approximating functions:
For a complicated function f(x) (which may be given explicitly, or only implicitly), find a simpler function v(x) that approximates f(x).
For instance, suppose we need to quickly find an approximate value for sin(1.2) (that’s 1.2 in radians, not degrees) with only a primitive calculator at hand. From basic trigonometry we know the values of sin(x) for x = 0, π/6, π/4, π/3 and π/2: how can we use these to estimate sin(1.2)?
For another instance, suppose we have a complex, expensive program that calculates the final point (say, the landing location) of the trajectory of a space shuttle for each given value of a certain control parameter. We perform this calculation, i.e., invoke the program, for several parameter values. But then, we may want to use this computed information to have an idea of the resulting landing location for other values of the control parameter – without resorting to the full calculation for each parameter value.
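To make the first instance concrete, here is a minimal MATLAB sketch (not from the text) that pushes the degree-4 interpolant through the five known sine values using the built-in polyfit and polyval; the node set and the evaluation point 1.2 are as in the discussion above.

% Estimate sin(1.2) from five exactly known sine values.
x = [0, pi/6, pi/4, pi/3, pi/2];   % abscissae where sin is known
y = sin(x);                        % stand-in for the tabulated values
c = polyfit(x, y, numel(x) - 1);   % degree-4 interpolant through all 5 points
polyval(c, 1.2)                    % about 0.932, close to sin(1.2)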
Interpolation techniques for such function approximation are identical to those of data fitting once we specify the data points $\{(x_i, y_i = f(x_i))\}_{i=0}^{n}$. The difference between function interpolation and data fitting interpolation is that in the former we
• have some freedom to choose xi cleverly, and
• may be able to consider the global interpolation error.
The need for interpolation
Why do we want to find an approximating function v(x) in general?
• For prediction: we can use v(x) to find approximate values of the underlying function at locations x other than the data abscissae $x_0, \ldots, x_n$.
If x is inside the smallest interval containing all the data abscissae then this is called interpolation; if x is outside that interval then we have extrapolation. For instance, we may have data regarding the performance of some stock at each trading week’s end during the past year. Interpolating for the value of the stock during other days of last year is a relatively safe undertaking. Extrapolating for the value of the stock sometime next year is much more risky (although potentially more interesting).
• For manipulation: an instance is finding approximations for derivatives and integrals of the underlying function.
The interpolating function must be not only easy to evaluate and manipulate, but also “reasonable”; i.e., it must resemble a curve that we would actually draw through the points – see Figure 11.1. Just exactly what qualifies as reasonable is context-dependent and hard to define in precise terms for general purposes.
Interpolants and their representation
We generally assume a linear form for all interpolating (or more generally, approximating) functions v(x).54 Thus we write

$$v(x) = \sum_{j=0}^{n} c_j \phi_j(x) = c_0\phi_0(x) + \cdots + c_n\phi_n(x),$$

where $\{c_j\}_{j=0}^{n}$ are unknown coefficients, or parameters determined from the data, and $\{\phi_j(x)\}_{j=0}^{n}$ are pre-determined basis functions. These basis functions are further assumed to be linearly independent, which means that if coefficients $\{c_j\}_{j=0}^{n}$ are found such that v(x) = 0 for all x, then all these coefficients themselves must vanish: $c_j = 0,\ j = 0, 1, \ldots, n$.
Notice our default assumption that the number of basis functions equals the number of data points n + 1. If there are fewer basis functions than data then we cannot hope for interpolation of all data values and we typically end up resorting to a least squares approach as in Chapter 6. Other techniques of approximating functions which do not fit the data exactly are considered in Chapters 13 and 14.
In general, the interpolation conditions can be written as n + 1 linear relations for the n + 1 unknown coefficients. The resulting linear system of equations is
$$\begin{pmatrix} \phi_0(x_0) & \phi_1(x_0) & \phi_2(x_0) & \cdots & \phi_n(x_0) \\ \phi_0(x_1) & \phi_1(x_1) & \phi_2(x_1) & \cdots & \phi_n(x_1) \\ \vdots & \vdots & \vdots & & \vdots \\ \phi_0(x_n) & \phi_1(x_n) & \phi_2(x_n) & \cdots & \phi_n(x_n) \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix}. \tag{11.1}$$
We may not actually directly form and solve the system (11.1) in a given situation, but it is always there as a result of the assumption of linear representation of v(x) in terms of the basis functions.
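As an illustration of how system (11.1) could be assembled for an arbitrary basis, here is a hedged MATLAB sketch; the cell array phi of function handles and the sample data are illustrative choices, not from the text.

% Form and solve system (11.1) for a given basis and data.
phi = {@(t) ones(size(t)), @(t) t, @(t) t.^2};   % a monomial basis, n = 2
x = [1; 2; 4];  y = [1; 3; 3];                   % sample data points
n = numel(x) - 1;
A = zeros(n+1, n+1);
for j = 0:n
    A(:, j+1) = phi{j+1}(x);   % column j+1 holds phi_j at all abscissae
end
c = A \ y;                     % coefficients c_0,...,c_n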
Here are some common examples of interpolants:
54 Note that the form of the interpolating function is linear in the sense that it is a linear combination of basis functions in some appropriate space, not that v(x) itself is a linear function of x.
• In the present chapter we consider polynomial interpolation,

$$v(x) = \sum_{j=0}^{n} c_j x^j = c_0 + c_1 x + \cdots + c_n x^n.$$

This simplest and most familiar form of representing a polynomial implies the choice of a monomial basis, $\phi_j(x) = x^j,\ j = 0, 1, \ldots, n$, but we will see other choices as well.

• In the next chapter we discuss piecewise polynomial interpolation, which is based on performing polynomial interpolation in “pieces”, rather than on the entire given interval.

• Trigonometric interpolation is also extremely useful, especially in signal processing and for describing wave and other periodic phenomena. For instance, consider $\phi_j(x) = \cos(jx),\ j = 0, 1, \ldots, n$. We elaborate on more general choices in this spirit in Chapter 14.
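As a small illustration of the trigonometric choice, the sketch below interpolates sample data with the basis $\phi_j(x) = \cos(jx)$ by solving system (11.1); the abscissae and the data are illustrative assumptions, not from the text.

x = (0:3)' * pi/3;               % four abscissae in [0, pi]
y = exp(-x);                     % sample data to interpolate
A = cos(x * (0:3));              % A(i+1, j+1) = cos(j * x_i)
c = A \ y;                       % coefficients of v(x) = sum_j c_j cos(j x)
v = @(t) cos(t(:) * (0:3)) * c;  % the resulting trigonometric interpolant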
In general, it is important to distinguish two stages in the interpolation process:
1. Constructing the interpolant. These are operations that are independent of where we would then evaluate v(x). An instance of this is determining the coefficients c0, c1, . . . , cn for a given basis φ0(x), φ1(x), . . . , φn(x).
2. Evaluating the interpolant at a given point x.
The interpolant construction is done once for a given set of data. After that, the evaluation may be applied many times.
Polynomial interpolation
The main reason why polynomial approximation is desirable is its simplicity. Polynomials

• are easy to construct and evaluate (recall also the nested form from Example 1.5; a sketch follows this list);
• are easy to sum and multiply (and the result is also a polynomial);
• are easy to differentiate and integrate (and the result is also a polynomial); and

• have widely varying characteristics despite their simplicity.
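As a sketch of the nested (Horner) evaluation mentioned in the first bullet, the following MATLAB function evaluates a polynomial given its coefficients in ascending order; the function name and calling convention are illustrative assumptions (save as horner_eval.m).

function p = horner_eval(c, t)
% Evaluate c(1) + c(2)*t + ... + c(end)*t^n in nested form, about 2n flops.
p = c(end);
for k = numel(c)-1:-1:1
    p = p .* t + c(k);   % p = c_k + t*(current p)
end
end

For example, horner_eval([-1, 2], 3) evaluates $-1 + 2x$ at $x = 3$ and returns 5.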
Note: The rest of this chapter is devoted exclusively to polynomial interpolation.
11.2 Monomial interpolation
Let us denote a polynomial interpolant of degree at most n by

$$p(x) = p_n(x) = \sum_{j=0}^{n} c_j x^j = c_0 + c_1 x + \cdots + c_n x^n.$$

For n + 1 data points

$$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n),$$

we want to find n + 1 coefficients55 $c_0, c_1, \ldots, c_n$ such that $p(x_i) = y_i,\ i = 0, 1, \ldots, n$.
We will assume, until Section 11.7, that the abscissae of the data points are distinct, meaning $x_i \neq x_j$ whenever $i \neq j$.
Example 11.1. Let n = 1 and let our two data points be (x0, y0) = (1, 1) and (x1, y1) = (2, 3).
We want to fit a polynomial of degree at most 1 of the form
p1(x) = c0 + c1x
through these two points.
The interpolating conditions are

$$p_1(x_0) = c_0 + 1 \cdot c_1 = 1, \qquad p_1(x_1) = c_0 + 2 \cdot c_1 = 3.$$
These are two linear equations for the two unknowns c0 and c1, which can be solved using high school algebra techniques: subtracting the first equation from the second gives $c_1 = 2$, and then $c_0 = 1 - c_1 = -1$, so
p1(x) = 2x − 1.
This linear interpolant is depicted in Figure 11.2.
Next, let n = 2 and let our three data points be (1, 1), (2, 3), and (4, 3). The first two are
the same as above, but the third data pair specifies a significantly different value at x = 4 than what p1(x) predicts: while p1(4) = 7, here we seek a polynomial whose value at x = 4 equals 3. Thus, we want to fit a polynomial of degree at most 2 of the form
$$p_2(x) = c_0 + c_1 x + c_2 x^2$$
through these three points. Note that the coefficients c0 and c1 in p2(x) are not expected to be
the same as for p1(x), in general. The interpolating conditions are
$$p_2(x_0) = c_0 + 1 \cdot c_1 + 1 \cdot c_2 = 1, \quad p_2(x_1) = c_0 + 2 \cdot c_1 + 4 \cdot c_2 = 3, \quad p_2(x_2) = c_0 + 4 \cdot c_1 + 16 \cdot c_2 = 3.$$
This is a 3 × 3 linear system for the unknown coefficients $c_j$, which in matrix form reads

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 4 & 16 \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 3 \end{pmatrix}.$$

55 Remember that a polynomial of degree n has n + 1, not n, coefficients.
The MATLAB commands

A = [1 1 1; 1 2 4; 1 4 16];
y = [1; 3; 3];
c = A \ y

yield (of course, up to a rounding error) $c_0 = -7/3$, $c_1 = 4$, $c_2 = -2/3$. This completes the construction of the quadratic interpolant. The desired interpolating polynomial $p_2$ is

$$p_2(x) = (-2x^2 + 12x - 7)/3,$$

and this can be evaluated for any given value of x. For instance, at x = 3 we have

$$p_2(3) = 11/3,$$

which is considerably lower than the value $p_1(3) = 5$. Observe also the different values of the two polynomials at x = 1.5 in Figure 11.2, as the pair of close but not coinciding magenta symbols illustrates.
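Assuming the coefficient vector c from the backslash solve above (stored in ascending powers), the interpolant can also be evaluated with the built-in polyval, which expects coefficients in descending order; this one-liner is a usage sketch, not part of the original example.

polyval(flipud(c), 3)   % returns 3.6667, i.e., 11/3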
Figure 11.2. Quadratic and linear polynomial interpolation: $p_1(x) = 2x - 1$ and $p_2(x) = (-2x^2 + 12x - 7)/3$.

Unique interpolating polynomial
Generalizing the example above to the case of n + 1 data points, the interpolation conditions lead to a system of (n + 1) linear equations in the (n + 1) unknowns c0,c1,…,cn given by
1 x1 x2 ··· xnc y 00000
1 x11 x12 ··· x1nc1 y1 . . . .  .  =  . .
1 xn1 xn2 ··· xnn cn yn
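In MATLAB, this Vandermonde system can be formed and solved in a few lines; the sketch below uses implicit expansion (available from release R2016b on) and the data of Example 11.1.

x = [1; 2; 4];  y = [1; 3; 3];   % data points
n = numel(x) - 1;
X = x .^ (0:n);                  % X(i+1, j+1) = x_i^j
c = X \ y;                       % monomial coefficients, ascending order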
Product Notation
Let $z_1, z_2, \ldots, z_n$ be n scalar values. We are used to the notation for their sum, given by

$$\sum_{i=1}^{n} z_i = z_1 + z_2 + \cdots + z_n.$$

Analogously, the notation for their product is

$$\prod_{i=1}^{n} z_i = z_1 \cdot z_2 \cdots z_n.$$
Theorem: Polynomial Interpolant Unique Existence.
For any real data points $\{(x_i, y_i)\}_{i=0}^{n}$ with distinct abscissae $x_i$ there exists a unique polynomial p(x) of degree at most n which satisfies the interpolation conditions

$$p(x_i) = y_i, \quad i = 0, 1, \ldots, n.$$
The coefficient matrix X is called a Vandermonde matrix; in particular, it is known from an introductory linear algebra text that
$$\det(X) = \prod_{i=0}^{n-1} \prod_{j=i+1}^{n} (x_j - x_i),$$

i.e., this determinant is the product of all possible differences in the $x_i$. So, as long as the abscissae are distinct, $\det(X) \neq 0$ and hence X is nonsingular.56 This argument provides a simple proof of the theorem given above: there exists a unique interpolating polynomial p. The uniqueness of polynomial interpolation is particularly important because of the different forms that this polynomial can take: the same result is obtained, no matter which method or basis is used to obtain the interpolating polynomial.
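A quick numerical check of this determinant formula, using the abscissae of Example 11.1 (an illustrative sketch):

x = [1; 2; 4];
X = x .^ (0:numel(x)-1);      % Vandermonde matrix
d = 1;
for i = 1:numel(x)-1
    for j = i+1:numel(x)
        d = d * (x(j) - x(i));   % accumulate the differences x_j - x_i
    end
end
disp([det(X), d])             % both equal 6, so X is nonsingular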
Later on, in Section 11.7, we will generalize this existence and uniqueness result for the case when data abscissae are not necessarily distinct.
Using the monomial basis for constructing interpolants
Our discussion so far not only shows uniqueness but also suggests a general way for obtaining the interpolating polynomial p(x): form the Vandermonde matrix and solve a linear system of equations. The big advantage of this approach is its intuitive simplicity and straightforwardness. However, if we consider a general-purpose use of polynomial interpolation then this approach has disadvantages:
1. The calculated coefficients cj are not directly indicative of the interpolated function, and they may completely change if we wish to slightly modify the interpolation problem; more on this in Sections 11.3 and 11.4.
56 It also verifies our assumption that the basis functions, in this case the monomials $\phi_j(x) = x^j,\ j = 0, 1, \ldots, n$, are linearly independent, because the matrix is nonsingular for an arbitrary choice of distinct points $x_0, x_1, \ldots, x_n$.
2. The Vandermonde matrix X is often ill-conditioned, so the coefficients thus determined are prone to inaccuracies.
3. This approach requires about $\frac{2}{3}n^3$ operations (flops) to carry out Gaussian elimination (see Section 5.1) for the construction stage; another method exists which requires only about $n^2$ operations. The evaluation stage, however, is as quick as can be; using the nested form, it requires about 2n flops per evaluation point.
There are situations in which the last two disadvantages are not important. First, the higher computational cost, if there is any, is important only when n is “large”, not when it equals 2 or 3. Also, the ill-conditioning, i.e., the claim that the basis $\phi_j(x) = x^j,\ j = 0, 1, \ldots, n$, is “not good” in the sense that roundoff error gets unreasonably magnified, occurs mostly when the interval of interpolation is wide or n is not small. If all points of concern, including all data points $x_i$ and evaluation points x, are in a small interval near, say, some point $\hat{x}$, then the basis

$$\{1, (x - \hat{x}), (x - \hat{x})^2, (x - \hat{x})^3\}$$

is perfectly reasonable for a cubic polynomial. More on this in Section 12.2. The polynomial

$$p(x) = c_0 + c_1(x - \hat{x}) + \cdots + c_n(x - \hat{x})^n$$

is, in fact, reminiscent of a Taylor series expansion (see page 6),

$$f(x) = f(\hat{x}) + f'(\hat{x})(x - \hat{x}) + \cdots + \frac{f^{(n)}(\hat{x})}{n!}(x - \hat{x})^n + R_n(x),$$

where the remainder term can be written as

$$R_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x - \hat{x})^{n+1}$$

for some $\xi$ between x and $\hat{x}$.
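The effect of shifting the basis can be seen numerically; in the following sketch the interval, the point xhat, and the cubic degree are illustrative assumptions, not from the text.

xhat = 100;
x = xhat + [-0.2; -0.1; 0.1; 0.2];   % abscissae in a narrow interval near xhat
V  = x .^ (0:3);                     % plain monomial basis 1, x, x^2, x^3
Vs = (x - xhat) .^ (0:3);            % shifted basis 1, (x-xhat), ...
disp([cond(V), cond(Vs)])            % V is severely ill-conditioned; Vs is benign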
Throughout most of this chapter we consider values of n in the single decimal digit territory.
Larger polynomial degrees appear only in Section 11.6, where the monomial basis is indeed inadequate.
There are several additional desirable features, often far more important than the efficiency considerations mentioned above, which the simple monomial basis $\{\phi_j(x) = x^j\}$ does not have. These are introduced in the next two sections, together with bases that do possess such features and allow a more intuitive understanding in the context of both the problems and their resolutions.
Specific exercises for this section: Exercises 11.1–11.3.

11.3 Lagrange interpolation
The coefficients $c_j$ of the polynomials $p_1(x)$ and $p_2(x)$ in Example 11.1 do not relate directly to the given data values $y_j$. It would be nice if a polynomial basis is found such that $c_j = y_j$, giving

$$p(x) = p_n(x) = \sum_{j=0}^{n} y_j \phi_j(x).$$
Such a representation would be particularly easy to manipulate, for instance, when we seek formulas for differentiation or integration. This is shown in later chapters, particularly Chapters 15 through 17.
Such a polynomial basis is provided by the Lagrange interpolation process. For this we define the Lagrange polynomials, $L_j(x)$, which are polynomials of degree n that satisfy

$$L_j(x_i) = \begin{cases} 0, & i \neq j, \\ 1, & i = j. \end{cases}$$

Given data $y_i$ at abscissae $x_i$ as before, the unique polynomial interpolant of degree at most n can now be written as

$$p(x) = p_n(x) = \sum_{j=0}^{n} y_j L_j(x).$$

Indeed, p is of degree at most n (being a linear combination of polynomials of degree n), and it satisfies the interpolation conditions because

$$p(x_i) = \sum_{j=0}^{n} y_j L_j(x_i) = 0 + \cdots + 0 + y_i L_i(x_i) + 0 + \cdots + 0 = y_i.$$
Example 11.2. Let us use the same three data pairs of Example 11.1, namely, (1, 1), (2, 3) and (4, 3), to demonstrate the construction of Lagrange polynomials. To make L0 (x) vanish at x = 2 and x = 4 we write
L0(x) = a(x − 2)(x − 4).
Requiring $L_0(1) = 1$ then determines $a(1 - 2)(1 - 4) = 1$, i.e., $a = \frac{1}{3}$, and thus

$$L_0(x) = \tfrac{1}{3}(x - 2)(x - 4).$$
Similarly we determine that

$$L_1(x) = -\tfrac{1}{2}(x - 1)(x - 4), \qquad L_2(x) = \tfrac{1}{6}(x - 1)(x - 2).$$
These Lagrange polynomials are depicted in Figure 11.3. We thus obtain the interpolant
$$p_2(x) = \frac{y_0}{3}(x-2)(x-4) - \frac{y_1}{2}(x-1)(x-4) + \frac{y_2}{6}(x-1)(x-2) = \frac{1}{3}(x-2)(x-4) - \frac{3}{2}(x-1)(x-4) + \frac{3}{6}(x-1)(x-2).$$
Despite the different form, this is precisely the same quadratic interpolant as the one in Example 11.1, so in fact, $p_2(x) = (-2x^2 + 12x - 7)/3$. It is also easy to verify that here, too, $p_2(3) = 11/3$. All this should not come as a surprise; it is an illustration of the uniqueness of polynomial interpolation; see the theorem on page 429.

Properties of Lagrange polynomials

What properties do Lagrange polynomials have? What do they look like?
Figure 11.3. The quadratic Lagrange polynomials $L_0(x)$, $L_1(x)$ and $L_2(x)$ based on points $x_0 = 1$, $x_1 = 2$, $x_2 = 4$, used in Example 11.2.
In the general case where n + 1 data abscissae $x_i$ are specified, the Lagrange polynomials uniquely exist, because they are really nothing other than polynomial interpolants for special data.57 This is also straightforward to verify directly, by explicitly specifying

$$L_j(x) = \frac{(x - x_0)\cdots(x - x_{j-1})(x - x_{j+1})\cdots(x - x_n)}{(x_j - x_0)\cdots(x_j - x_{j-1})(x_j - x_{j+1})\cdots(x_j - x_n)} = \prod_{\substack{i=0 \\ i \neq j}}^{n} \frac{x - x_i}{x_j - x_i}.$$

Indeed, the polynomial of degree n, written in terms of its roots as

$$(x - x_0)\cdots(x - x_{j-1})(x - x_{j+1})\cdots(x - x_n),$$

clearly interpolates the 0-values at all data abscissae other than $x_j$, and dividing by its value at $x_j$ normalizes the expression to yield $L_j(x_j) = 1$. Another picture of a Lagrange polynomial is provided in Figure 11.4.

The Lagrange polynomials form an ideally conditioned basis $\phi_j(x) = L_j(x)$ for all polynomials of degree at most n. In the case of Lagrange interpolation, the matrix elements in the system (11.1) on page 425 are $\phi_j(x_i) = L_j(x_i)$, and we see that what was a potentially problematic Vandermonde matrix for the monomial basis in Section 11.2 is now an identity matrix. Thus, the system (11.1) (which is never formed as such) yields the solution $c_j = y_j,\ j = 0, 1, \ldots, n$.

Note that $L_j$ has n zeros and thus n − 1 extrema (alternating minima and maxima). Incidentally, $L_j(x_j) = 1$ need not be a maximum for $L_j(x)$.

57 For each j, set $y_i \leftarrow 1$ if $i = j$ and $y_i \leftarrow 0$ if $i \neq j$, $i = 0, 1, \ldots, n$.
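A direct MATLAB sketch of evaluating the Lagrange form: the helper below builds each $L_j(t)$ as a product and accumulates $\sum_j y_j L_j(t)$. The function name is an illustrative assumption (save as lagrange_eval.m), and no attempt is made at efficiency.

function p = lagrange_eval(x, y, t)
% x, y: data with distinct abscissae; t: evaluation point(s).
p = zeros(size(t));
m = numel(x);                     % m = n + 1 data points
for j = 1:m
    Lj = ones(size(t));
    for i = [1:j-1, j+1:m]
        Lj = Lj .* (t - x(i)) / (x(j) - x(i));   % one factor of L_j(t)
    end
    p = p + y(j) * Lj;            % accumulate y_j * L_j(t)
end
end

For the data of Example 11.2, lagrange_eval([1 2 4], [1 3 3], 3) returns 3.6667, i.e., $p_2(3) = 11/3$ again.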
Figure 11.4. The Lagrange polynomial L2(x) for