A First Course in Numerical Methods
Uri M. Ascher and Chen Greif
February 10, 2022
1 Numerical Algorithms
1.1 Scientific computing
1.2 Numerical algorithms and errors
1.3 Algorithm and problem properties
1.4 Floating point systems
1.5 Roundoff errors
1.6 Exercises
1.7 Additional notes
2 Nonlinear Equations in One Variable
2.1 Solving nonlinear equations
2.2 Bisection method
2.3 Newton's method and variants
2.4 Speed of convergence
2.5 Fixed-point iteration
2.6 Minimizing a function in one variable
2.7 Exercises
2.8 Additional notes
3 Linear Algebra Background
3.1 Review of some basic concepts
3.2 Vector and matrix norms
3.3 Special classes of matrices
3.4 Singular values
3.5 Examples
3.6 Exercises
3.7 Additional notes
4 Linear Systems: Direct Methods
4.1 Gaussian elimination and backward substitution
4.2 LU decomposition
4.3 Pivoting strategies
4.4 The Cholesky decomposition
4.5 Estimating errors and the condition number
4.6 Exercises
4.7 Additional notes
5 Linear Systems: Sparse Direct Solvers
5.1 Banded matrices
5.2 Sparse matrices
5.3 Sparse Gaussian elimination
5.4 Ordering strategies
5.5 Exercises
5.6 Additional notes
6 Linear Least-Squares Problems
6.1 Least-squares problems and the normal equations
6.2 Orthogonal transformations and QR decomposition
6.3 Algorithms for computing the QR decomposition
6.4 SVD and truncated SVD methods
6.5 Weighted least-squares problems
6.6 The linear Kalman filter
6.7 Exercises
6.8 Additional notes
7 Linear Systems: Iterative Methods
7.1 The need for iterative methods
7.2 Stationary iteration and relaxation methods
7.3 Convergence of stationary methods
7.4 Gradient descent methods
7.5 Conjugate gradient method
7.6 Preconditioning techniques
7.7 *Krylov space methods
7.8 *Multigrid methods
7.9 Exercises
7.10 Additional notes
8 Eigenvalues and Singular Values
8.1 The power method and variants
8.2 Singular value decomposition
8.3 General methods for computing eigenvalues and singular values
8.4 Exercises
8.5 Additional notes
9 Unconstrained Optimization and Nonlinear Systems
9.1 Newton's method for nonlinear systems
9.2 Unconstrained optimization: optimality and convexity
9.3 Basic methods
9.4 Secant and BFGS methods
9.5 Nonlinear least squares methods
9.6 First order methods
9.7 Derivative-free methods
9.8 Methods for large problems
9.9 Exercises
9.10 Additional notes
10 Constrained Optimization
10.1 Constrained optimization
10.2 General preparation and method overview
10.3 Linear programming: an active set method
10.4 Linear programming: a primal-dual algorithm
10.5 Quadratic programming and sequential quadratic programming
10.6 Penalty, barrier and augmented Lagrangian methods
10.7 Convex low-smoothness unconstrained optimization
10.8 Large problems
10.9 Exercises
10.10 Additional notes
11 Polynomial Interpolation
11.1 General approximation and interpolation
11.2 Monomial interpolation
11.3 Lagrange interpolation
11.4 Divided differences and Newton's form
11.5 The error in polynomial interpolation
11.6 Chebyshev interpolation
11.7 Interpolating also derivative values
11.8 Exercises
11.9 Additional notes
12 Piecewise Polynomial Interpolation
12.1 The case for piecewise polynomial interpolation
12.2 Broken line and piecewise Hermite interpolation
12.3 Cubic spline interpolation
12.4 Hat functions and B-splines
12.5 Parametric curves
12.6 *Multi-dimensional interpolation
12.7 Exercises
12.8 Additional notes
13 Best Approximation
13.1 Continuous least squares approximation
13.2 Orthogonal basis functions
13.3 Weighted least squares
13.4 Chebyshev polynomials
13.5 Exercises
13.6 Additional notes
14 Fourier Transform
14.1 The Fourier transform
14.2 Discrete Fourier transform and trigonometric interpolation
14.3 Fast Fourier transform
14.4 Exercises
14.5 Additional notes
15 Numerical Differentiation
15.1 Deriving formulas using Taylor series
15.2 Richardson extrapolation
15.3 Deriving formulas using Lagrange polynomial interpolation
15.4 Roundoff and data errors in numerical differentiation
15.5 Differentiation matrices and global derivative approximation
15.6 Exercises
15.7 Additional notes
16 Numerical Integration
16.1 Basic quadrature algorithms
16.2 Composite numerical integration
16.3 Gaussian quadrature
16.4 Adaptive quadrature
16.5 Romberg integration
16.6 *Multi-dimensional integration
16.7 Exercises
16.8 Additional notes
17 Ordinary Differential Equations
17.1 Initial value ordinary differential equations
17.2 Euler's method
17.3 Runge–Kutta methods
17.4 Multistep methods
17.5 Absolute stability and stiffness
17.6 Error control and estimation
17.7 Geometric integration and highly oscillatory problems
17.8 Boundary value ODEs
17.9 Exercises
17.10 Additional notes
18 Partial Differential Equations
18.1 Basic PDE classification
18.2 The finite element method
18.3 Methods for hyperbolic problems
18.4 Spectral methods
18.5 Exercises
18.6 Additional notes
Bibliography
Index
List of Figures
1.1 Scientific computing.
1.2 A simple instance of numerical differentiation: the tangent f′(x0) is approximated by the chord (f(x0 + h) − f(x0))/h.
1.3 The combined effect of discretization and roundoff errors. The solid curve interpolates the computed values of |f′(x0) − (f(x0 + h) − f(x0))/h| for f(x) = sin(x), x0 = 1.2. Also shown in dash-dot style is a straight line depicting the discretization error without roundoff error.
1.4 An ill-conditioned problem of computing output values y given in terms of input values x by y = f(x): when the input x is slightly perturbed to x̄, the result ȳ = f(x̄) is far from y. If the problem were well-conditioned, we would be expecting the distance between y and ȳ to be more comparable in magnitude to the distance between x and x̄.
1.5 An instance of a stable algorithm for computing y = f(x): the output ȳ is the exact result, ȳ = f(x̄), for a slightly perturbed input, i.e., an x̄ which is close to the input x. Thus, if the algorithm is stable and the problem is well-conditioned, then the computed result ȳ is close to the exact y.
1.6 A double word (64 bits) in the standard floating point system. The blue bit is for the sign, the magenta bits store the exponent (11 bits), and the green ones (52 bits) are for the fraction.
1.7 Picture of the floating point system described in Example 1.7.
1.8 The exact values of y0, y1, ..., y19 in Example 1.11.
1.9 Computed values (in red) for y0, y1, ..., y17 in Example 1.11. For comparison, the exact values (in blue) are given as well. Note the different y-scale in Figures 1.9 and 1.8. Approximate and exact values of yn appear to the eye to be close up to n = 15. But from there on rapid divergence is clear. Note the values of the computed y18 and y19 given in the text.
2.1 Graphs of three functions and the corresponding real roots (if there are any): (i) f1(x) = sin(x) on [0, 4π], (ii) f2(x) = x^3 − 30x^2 + 2552 on [0, 20], and (iii) f3(x) = 10 cosh(x/4) − x on [−10, 10].
2.2 The function f(x) = x + ln(x) on the interval [0.3, 1] with the crossing point from negative to positive marked.
2.3 Newton's method: the next iterate is the x-intercept of the tangent line to f at the current iterate.
2.4 The function f(x) = 2 cosh(x/4) − x.
2.5 Secant method: the next iterate is the finite difference approximation of the x-intercept of the tangent line to f using the two most recent iterates.
2.6 Fixed point iteration for x = e^(−x), starting from x0 = 1. This yields x1 = e^(−1), x2 = e^(−e^(−1)), .... Convergence is apparent.
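As an illustrative sketch of the iteration described in the caption for Figure 2.3 (not the book's own code; the initial guess and tolerance are assumptions), here is Newton's method applied to the function f(x) = 2 cosh(x/4) − x of Figure 2.4:

```python
import math

# Newton's method for f(x) = 2*cosh(x/4) - x (Figure 2.4).
# The starting guess and tolerance below are illustrative assumptions.
def f(x):
    return 2.0 * math.cosh(x / 4.0) - x

def fprime(x):
    return 0.5 * math.sinh(x / 4.0) - 1.0

x = 2.0                       # initial guess near the smaller root
for k in range(20):
    step = f(x) / fprime(x)
    x -= step                 # next iterate: x-intercept of the tangent line
    if abs(step) < 1e-12:     # stop once the Newton step is negligible
        break
print(k, x, f(x))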
2.7 The functions x and 2 cosh(x/4) meet at two locations.
2.8 Convergence history of the Newton, secant, and the fixed-point iteration of Example 2.10.
2.9 Graph of an anonymous function, Exercise 2.6.
3.1 Intersection of two straight lines: a11 x1 + a12 x2 = b1 and a21 x1 + a22 x2 = b2.
3.2 Eigenvalues in the complex plane, Example 3.5. Note that the complex ones appear in pairs: if λ is an eigenvalue, then so is λ̄. Also, here all eigenvalues have nonpositive real parts.
3.3 The "unit circle" according to the three norms, l1, l2, and l∞. Note that the diamond is contained in the circle, which in turn is contained in the square.
3.4 The Cartesian coordinate system as a set of orthonormal vectors.
3.5 Singular value decomposition: A_{m×n} = U_{m×m} Σ_{m×n} V^T_{n×n}.
3.6 Data fitting by cubic polynomial interpolation using the solution of a linear system for the polynomial's coefficients.
3.7 Data fitting by a straight line, often called linear regression.
3.8 A discrete mesh for approximating a function and its second derivative in Example 3.19.
3.9 Recovering a function v satisfying v(0) = v(1) = 0 from its second derivative. Here N = 507.
3.10 Example 3.20: a point cloud representing (a) a surface in three-dimensional space, and (b) together with its unsigned normals.
4.1 Gaussian elimination for the case n = 4.
4.2 LU decomposition.
4.3 Progress in the computation of the lower triangular factor of the Cholesky decomposition. The part of the Cholesky factor G that has already been formed is marked in blue, red is the column being computed, and green marks the updated submatrix that remains to be factored.
4.4 Stretching the unit circle by a symmetric positive definite transformation.
5.1 The graph of A (left), the clique forming the elimination graph of A(1) (middle), and the graph of B (right), for Example 5.1.
5.2 A step in computing the Cholesky factor of a matrix using up-looking Cholesky. Red marks the row of G that is currently being computed. Green indicates the part of G that has already been computed; this part is accessed but not modified. The blue part is not accessed in this step. When the red row is computed we look up above it to extract the information needed, hence the term 'up-looking.'
5.3 Sparsity pattern of A (left) and its Cholesky factor G (right) in Example 5.5. The red stars mark the filled entries, namely entries that are zero in the lower part of A and become nonzero during the factorization.
5.4 Graph of the matrix of Example 5.5 (left), and the elimination graph after one step of Cholesky decomposition (right).
5.5 An elimination step in the matrix of Example 5.5. The green circles are entries that are accessed during the elimination for entries (4, 2) and (5, 2). The red circle is an entry that is generated as a result. Entries g42 and g52 in G will be nonzero because the corresponding entries in A (marked in black) are nonzero. Entry g54 becomes nonzero as a result.
5.6 The elimination tree (left), applied to the 7 × 7 matrix of Example 5.5. The graph on the right is the post-ordered tree.
5.7 A matrix (left) and its Cholesky factor (right). A supernode is marked by the red larger circles.
5.8 Elimination tree (left) and matrix elements that correspond to two 3 × 3 submatrices for which partial factorizations may proceed almost completely in parallel, as explained in Example 5.8.
5.9 Quotient graph of the matrix of Example 5.5 after three and four steps of elimination. In the fourth step the elements 1 and 2 are absorbed into element 4. The connection between nodes 5 and 6 is no longer needed because they are both associated with the element node 4.
5.10 The permuted matrix of Example 5.5 after applying RCM ordering.
5.11 Nested dissection ordering for a 31 × 31 grid.
5.12 The matrix A in lexicographic ordering (top left, blue), its nested dissection ordering (top right, red), approximate minimum degree (bottom left, green), and reverse Cuthill–McKee ordering (bottom right, black).
5.13 Sparsity pattern of the Cholesky factors of A (upper left), nested dissection (upper right), AMD (lower left), and RCM (lower right).
6.1 Matrices and vectors and their dimensions.
6.2 Discrete least-squares approximation.
6.3 Linear regression curve (in blue) through green data. Here, m = 25, n = 2.
6.4 The first 5 best polynomial approximations to f(t) = cos(2πt) + 10(t − .5)^5 sampled at 0 : .05 : 1. The data values appear as red circles. Clearly, the larger n the better the polynomial p_{n−1} fits the data.
6.5 Gram–Schmidt in action, Example 6.8: a1 and a2 are the columns of the original matrix A; q1 and q2 are orthonormal vectors that are constructed through the process and form the columns of Q.
6.6 Givens rotation in action. A vector (x, y) is rotated by an angle θ so that the rotated vector has the same length and lies on the x-axis.
6.7 A Householder reflection in action. The vector Pz is a reflection of z through a plane that includes the origin and is perpendicular to u.
6.8 Householder reflection. Depicted is a 20 × 5 matrix A after 3 reflections.
6.9 A rider being tracked from high above.
6.10 Measured position data for Example 6.18: (a) just the data; (b) including also a broken-line interpolant (blue) and the unknown, noiseless smooth function we used to synthesize the data (red). The interpolant looks unrealistically jerky because of the noise.
6.11 Kalman filter and smoother results for Example 6.20: (a) filter; (b) smoother.
7.1 A 2D cross-section of a 3D domain with a square grid added.
7.2 A 2D grid with grid function values at its nodes, discretizing the unit square. The length of each edge of the small squares is the grid width h, while their union is a square with edge length 1. The locations of u_{i,j} and those of its neighbors that are participating in the difference formula (7.1) are marked in red.
7.3 The sparsity pattern of A for Example 7.1 with N = 10. Note that there are nz = 460 nonzeros out of 10,000 matrix locations.
7.4 Red-black ordering. First sweep simultaneously over the "black" points, then over the "red".
7.5 Steepest descent applied to Ax = 0 in Example 7.9. Shown are the contour plots of φ and the iterates. The iterates zigzag uselessly.
7.6 Example 7.11, with N = 31: convergence behavior of steepest descent and gradient descent for the discretized Poisson equation.
7.7 Example 7.11: conjugate gradient applied to A2 x = 0, where A2 is defined in Example 7.9.