
Lecture Notes to Accompany
Scientific Computing
An Introductory Survey
Second Edition
by Michael T. Heath
Chapter 2
Systems of Linear Equations
Copyright © 2001. Reproduction permitted only for noncommercial, educational use in conjunction with the book.
1

Systems of Linear Equations
Given m × n matrix A and m-vector b, find unknown n-vector x satisfying
Ax = b
System of equations asks “Can b be expressed
as linear combination of columns of A?”
If so, coefficients of linear combination given
by components of solution vector x
Solution may or may not exist, and may or may
not be unique
For now, we consider only square case, m = n
2

Singularity and Nonsingularity
n × n matrix A is nonsingular if it has any of following equivalent properties:
1. Inverse of A, denoted by A−1, exists
2. det(A) ≠ 0
3. rank(A) = n
4. For any vector z ≠ o, Az ≠ o
3

Singularity and Nonsingularity, cont.
Solvability of Ax = b depends on whether A is singular or nonsingular
If A is nonsingular, then Ax = b has unique solution for any b
If A is singular, then Ax = b has either no solution or infinitely many solutions, depending on b
If A is singular and Ax = b, then A(x + γz) = b for any scalar γ, where Az = o and z ≠ o, so solution not unique

One solution: nonsingular
No solution: singular
∞ many solutions: singular
4

Geometric Interpretation
In two dimensions, each equation determines straight line in plane
Intersection point of two lines is solution
If two straight lines not parallel (nonsingular), then intersection point unique
If two straight lines parallel (singular), then lines either do not intersect (no solution) or else coincide (any point along line is solution)
In higher dimensions, each equation determines hyperplane. If matrix nonsingular, intersection of hyperplanes is unique solution
5

Example: Nonsingularity
2 × 2 system
2x1 + 3x2 = b1,
5x1 + 4x2 = b2,
or in matrix-vector notation

Ax = [2  3] [x1] = [b1] = b,
     [5  4] [x2]   [b2]
is nonsingular regardless of value of b
For example, if b = [8 13]T, then x = [1 2]T is unique solution
6

Example: Singularity
2 × 2 system
Ax = [2  3] [x1] = [b1] = b
     [4  6] [x2]   [b2]

is singular regardless of value of b
With b = [4 7]T, there is no solution
With b = [4 8]T, x = [γ (4−2γ)/3]T is solution for any real number γ
7

Vector Norms
Magnitude, modulus, or absolute value for scalars generalizes to norm for vectors
We will use only p-norms, defined by

∥x∥p = ( Σ_{i=1}^n |xi|^p )^{1/p}

for integer p > 0 and n-vector x
Important special cases:
• 1-norm: ∥x∥1 = Σ_{i=1}^n |xi|
• 2-norm: ∥x∥2 = ( Σ_{i=1}^n |xi|^2 )^{1/2}
• ∞-norm: ∥x∥∞ = maxi |xi|
8

Vector Norms, continued
Drawing shows unit sphere in two dimensions for each of these norms:
[Figure: unit circles {x : ∥x∥ = 1} in the plane for the 1-, 2-, and ∞-norms, with the vector x = (−1.6, 1.2) shown]
Norms have following values for vector shown:
∥x∥1 = 2.8, ∥x∥2 = 2.0, ∥x∥∞ = 1.6
In general, for any vector x in Rn, ∥x∥1 ≥ ∥x∥2 ≥ ∥x∥∞
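These values are easy to verify numerically; the following is an illustrative NumPy sketch, not part of the original notes:

    import numpy as np

    # Vector from the figure above
    x = np.array([-1.6, 1.2])

    norm_1   = np.sum(np.abs(x))        # 1-norm:   |x1| + |x2|       = 2.8
    norm_2   = np.sqrt(np.sum(x**2))    # 2-norm:   sqrt(x1^2 + x2^2) = 2.0
    norm_inf = np.max(np.abs(x))        # inf-norm: max(|x1|, |x2|)   = 1.6

    print(norm_1, norm_2, norm_inf)     # 2.8 2.0 1.6, and indeed 2.8 >= 2.0 >= 1.6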
9

Properties of Vector Norms
For any vector norm,
1. ∥x∥ > 0 if x ≠ o
2. ∥γx∥ = |γ| · ∥x∥ for any scalar γ
3. ∥x + y∥ ≤ ∥x∥ + ∥y∥ (triangle inequality)
In more general treatment, these properties
taken as definition of vector norm
Useful variation on triangle inequality:
| ∥x∥ − ∥y∥ | ≤ ∥x − y∥
10

Matrix Norms
Matrix norm corresponding to given vector norm defined by
∥A∥ = max_{x≠o} ∥Ax∥ / ∥x∥
Norm of matrix measures maximum stretching matrix does to any vector in given vector norm
Matrix norm corresponding to vector 1-norm is maximum absolute column sum,

∥A∥1 = max_j Σ_{i=1}^n |aij|

Matrix norm corresponding to vector ∞-norm is maximum absolute row sum,

∥A∥∞ = max_i Σ_{j=1}^n |aij|
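A quick NumPy check of these two formulas (an added sketch, using the 2 × 2 matrix from the earlier nonsingularity example):

    import numpy as np

    A = np.array([[2., 3.],
                  [5., 4.]])

    norm_1   = np.max(np.sum(np.abs(A), axis=0))   # max column sum: max(7, 7) = 7
    norm_inf = np.max(np.sum(np.abs(A), axis=1))   # max row sum:    max(5, 9) = 9

    # Agrees with NumPy's built-in induced matrix norms
    assert norm_1 == np.linalg.norm(A, 1)
    assert norm_inf == np.linalg.norm(A, np.inf)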
11

Properties of Matrix Norms
Any matrix norm satisfies:
1. ∥A∥ > 0 if A ≠ O
2. ∥γA∥ = |γ| · ∥A∥ for any scalar γ
3. ∥A + B∥ ≤ ∥A∥ + ∥B∥
Matrix norms we have defined also satisfy
4. ∥AB∥ ≤ ∥A∥ · ∥B∥
5. ∥Ax∥ ≤ ∥A∥ · ∥x∥ for any vector x
12

Condition Number of Matrix
Condition number of square nonsingular matrix A defined by
cond(A) = ∥A∥ · ∥A−1∥
By convention, cond(A) = ∞ if A singular
Since

∥A∥ · ∥A−1∥ = ( max_{x≠o} ∥Ax∥/∥x∥ ) · ( min_{x≠o} ∥Ax∥/∥x∥ )^{−1},

condition number measures ratio of maximum stretching to maximum shrinking matrix does to any nonzero vectors
Large cond(A) means A nearly singular
13

Properties of Condition Number
1. For any matrix A, cond(A) ≥ 1
2. For identity matrix, cond(I) = 1
3. For any matrix A and scalar γ, cond(γA) = cond(A)
4. For any diagonal matrix D = diag(di), cond(D) = (max |di|)/(min |di|)
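Property 4 is easy to check numerically; an illustrative sketch (not from the original notes):

    import numpy as np

    # cond(D) for diagonal D should equal (max |d_i|) / (min |d_i|)
    d = np.array([4.0, 0.5, 2.0, -8.0])
    D = np.diag(d)

    print(np.linalg.cond(D, p=np.inf))              # 16.0
    print(np.max(np.abs(d)) / np.min(np.abs(d)))    # 16.0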
14

Computing Condition Number
Definition of condition number involves matrix inverse, so nontrivial to compute
Computing condition number from definition would require much more work than computing solution whose accuracy to be assessed
In practice, condition number estimated inexpensively as byproduct of solution process
Matrix norm ∥A∥ easily computed as maximum absolute column sum (or row sum, depending on norm used)
Estimating ∥A−1∥ at low cost more challenging
15

Computing Condition Number, cont.
From properties of norms, if Az = y, then

∥z∥ / ∥y∥ ≤ ∥A−1∥,
and bound achieved for optimally chosen y
Efficient condition estimators heuristically pick y with large ratio ∥z∥/∥y∥, yielding good estimate for ∥A−1∥
Good software packages for linear systems provide efficient and reliable condition estimator
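One crude way to realize this idea is sketched below; the routine name and the random choice of y are illustrative additions, and production estimators choose y far more cleverly:

    import numpy as np

    def estimate_inv_norm_1(A, trials=5, seed=0):
        """Rough lower bound on ||A^{-1}||_1: try a few random +/-1 vectors y,
        solve Az = y, and keep the largest ratio ||z||_1 / ||y||_1.
        (In practice each solve would reuse the LU factors of A already computed.)"""
        rng = np.random.default_rng(seed)
        best = 0.0
        for _ in range(trials):
            y = rng.choice([-1.0, 1.0], size=A.shape[0])
            z = np.linalg.solve(A, y)
            best = max(best, np.linalg.norm(z, 1) / np.linalg.norm(y, 1))
        return best

    A = np.array([[0.641, 0.242],
                  [0.321, 0.121]])
    cond_estimate = np.linalg.norm(A, 1) * estimate_inv_norm_1(A)
    print(cond_estimate, np.linalg.cond(A, 1))   # estimate vs. exact 1-norm condition number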
16

Error Bounds
Condition number yields error bound for computed solution to linear system
Let x be solution to Ax = b, and let xˆ be solution to Axˆ = b + ∆b
If ∆x=xˆ−x, then
b + ∆b = A(xˆ) = A(x + ∆x) = Ax + A∆x,
which leads to bound
∥∆x∥ / ∥x∥ ≤ cond(A) · ∥∆b∥ / ∥b∥
for possible relative change in solution due to
relative change in right-hand side b
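A small numerical illustration of this bound (an added example): perturb b slightly in a moderately ill-conditioned system and compare actual change in x with the bound

    import numpy as np

    A  = np.array([[0.641, 0.242],
                   [0.321, 0.121]])
    b  = np.array([0.883, 0.442])
    db = 1e-6 * np.array([1.0, -1.0])      # small perturbation of right-hand side

    x     = np.linalg.solve(A, b)
    x_hat = np.linalg.solve(A, b + db)

    rel_change = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
    bound      = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)

    print(rel_change, bound, rel_change <= bound)   # the bound holds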
17

Error Bounds, continued
Similar result holds for relative change in matrix: if (A + E)xˆ = b, then
∥∆x∥ / ∥xˆ∥ ≤ cond(A) · ∥E∥ / ∥A∥
In two dimensions, uncertainty in intersection point of two lines depends on whether lines nearly parallel
[Figure: two intersecting lines, nearly perpendicular (well-conditioned) vs. nearly parallel (ill-conditioned)]
18

Error Bounds, continued
If input data accurate to machine precision, then bound for relative error in computed solution given by
∥xˆ − x∥ / ∥x∥ ≤ cond(A) εmach
Computed solution loses about log10(cond(A)) decimal digits accuracy relative to accuracy of input
We will later see example using 3-digit precision for problem with cond > 10^3, which yields no correct digits in solution
19

Caveats
1. Normwise analysis bounds relative error in largest components of solution; relative error in smaller components can be much larger
Componentwise error bounds can be obtained, but somewhat more complicated
2. Conditioning of system affected by scaling
Ill-conditioning can result from poor scaling as well as near singularity
Rescaling can help former, but not latter
20

Residual
Residual vector of approximate solution xˆ to linear system Ax = b defined by
r = b − A xˆ
In theory, if A is nonsingular, then ∥xˆ − x∥ = 0 if, and only if, ∥r∥ = 0, but they are not necessarily small simultaneously
Since
∥∆x∥ / ∥xˆ∥ ≤ cond(A) · ∥r∥ / (∥A∥ · ∥xˆ∥),
small relative residual implies small relative error only if A well-conditioned
21

Residual, continued
If computed solution xˆ exactly satisfies
(A + E)xˆ = b,
then
∥r∥ / (∥A∥ · ∥xˆ∥) ≤ ∥E∥ / ∥A∥,
so large relative residual implies large backward error in matrix, and algorithm used to compute solution is unstable
Stable algorithm yields small relative residual regardless how ill-conditioned nonsingular system may be
22

Solving Linear Systems
To solve linear system, transform it into one whose solution is same but easier to compute
What type of transformation of linear system leaves solution unchanged?
We can premultiply (from left) both sides of linear system Ax = b by any nonsingular matrix M without affecting solution
Solution to MAx = Mb is given by
x = (MA)−1Mb = A−1M−1Mb = A−1b
23

Example: Permutations
Permutation matrix P has one 1 in each row and column and zeros elsewhere; P−1 = PT
Premultiplying both sides of system by permutation matrix, PAx = Pb, reorders rows, but solution x unchanged
Postmultiplying A by permutation matrix, APx = b, reorders columns, which permutes components of original solution:
x = (AP)−1b = P−1A−1b = PT(A−1b)
24

Example: Diagonal Scaling
Row scaling: Premultiplying both sides of system by nonsingular diagonal matrix D, DAx = Db, multiplies each row of matrix and right-hand side by corresponding diagonal entry of D, but solution x unchanged
Column scaling: postmultiplying A by D, ADx = b, multiplies each column of matrix by corresponding diagonal entry of D, which rescales original solution:
x = (AD)−1b = D−1A−1b
25

Triangular Linear Systems
Next question is, what type of linear system is easy to solve?
If one equation in system involves only one component of solution (i.e., only one entry in that row of matrix is nonzero), then that component can be computed by division
If another equation in system involves only one additional solution component, then by substituting one known component into it, we can solve for other component
If this pattern continues, with only one new solution component per equation, then all components of solution can be computed in succession
System with this property called triangular
26

Triangular Matrices
Matrix is lower triangular if all entries above main diagonal are zero: aij = 0 for i < j
Matrix is upper triangular if all entries below main diagonal are zero: aij = 0 for i > j
Any triangular matrix can be permuted into upper or lower triangular form by suitable row permutation
27

Forward- and Back-Substitution
Forward-substitution for lower triangular system Lx = b:

x1 = b1 / l11
xi = ( bi − Σ_{j=1}^{i−1} lij xj ) / lii,   i = 2, …, n

Back-substitution for upper triangular system Ux = b:

xn = bn / unn
xi = ( bi − Σ_{j=i+1}^{n} uij xj ) / uii,   i = n−1, …, 1
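These two loops translate directly into code; a minimal NumPy sketch (function names chosen here, not from the notes):

    import numpy as np

    def forward_substitution(L, b):
        """Solve Lx = b for lower triangular L with nonzero diagonal."""
        n = len(b)
        x = np.zeros(n)
        x[0] = b[0] / L[0, 0]
        for i in range(1, n):
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    def back_substitution(U, b):
        """Solve Ux = b for upper triangular U with nonzero diagonal."""
        n = len(b)
        x = np.zeros(n)
        x[n - 1] = b[n - 1] / U[n - 1, n - 1]
        for i in range(n - 2, -1, -1):
            x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x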
28

Example: Triangular Linear System
 2 4 −2   x1   2   0 1 1   x 2  =  4  004×3 8
Last equation, 4×3 = 8, can be solved directly to obtain x3 = 2
x3 then substituted into second equation to obtain x2 = 2
Finally, both x3 and x2 substituted into first equation to obtain x1 = −1
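Applying the back_substitution sketch from the previous slide to this system reproduces the same values:

    import numpy as np

    U = np.array([[2., 4., -2.],
                  [0., 1.,  1.],
                  [0., 0.,  4.]])
    b = np.array([2., 4., 8.])

    print(back_substitution(U, b))   # [-1.  2.  2.]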
29

Elimination
To transform general linear system into triangular form, need to replace selected nonzero entries of matrix by zeros
This can be accomplished by taking linear combinations of rows

Consider 2-vector a = [a1]
                      [a2]

If a1 ≠ 0, then

[    1     0] [a1]   [a1]
[−a2/a1    1] [a2] = [ 0]
30

Elementary Elimination Matrices
More generally, can annihilate all entries below kth position in n-vector a by transformation

Mk a = [1  ⋯    0      0   ⋯  0] [ a1 ]   [a1]
       [⋮  ⋱    ⋮      ⋮       ⋮] [ ⋮  ]   [⋮ ]
       [0  ⋯    1      0   ⋯  0] [ ak ] = [ak]
       [0  ⋯  −mk+1    1   ⋯  0] [ak+1]   [0 ]
       [⋮       ⋮      ⋮   ⋱  ⋮] [ ⋮  ]   [⋮ ]
       [0  ⋯   −mn     0   ⋯  1] [ an ]   [0 ]

where mi = ai/ak, i = k+1, …, n
Divisor ak, called pivot, must be nonzero
Matrix Mk, called elementary elimination matrix, adds multiple of row k to each subsequent row, with multipliers mi chosen so that result is zero
31
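A small NumPy sketch constructing Mk for a given vector (an added helper; indices are 0-based here, unlike the slides):

    import numpy as np

    def elementary_elimination_matrix(a, k):
        """Return M_k that annihilates the entries of a below position k (0-based).
        The pivot a[k] must be nonzero."""
        n = len(a)
        M = np.eye(n)
        M[k + 1:, k] = -a[k + 1:] / a[k]    # entries -m_i with multipliers m_i = a_i / a_k
        return M

    a = np.array([2., 4., -2.])
    M1 = elementary_elimination_matrix(a, 0)
    print(M1 @ a)                           # [2. 0. 0.]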

Elementary Elimination Matrices, cont.
Mk is unit lower triangular and nonsingular
Mk = I − mk ekT, where mk = [0, …, 0, mk+1, …, mn]T and ek is kth column of identity matrix
Mk−1 = I + mk ekT, which means Mk−1 = Lk is same as Mk except signs of multipliers reversed
If Mj, j > k, is another elementary elimination matrix, with vector of multipliers mj, then

MkMj = I − mk ekT − mj ejT + mk ekT mj ejT
     = I − mk ekT − mj ejT,

since ekT mj = 0, so product is essentially "union," and similarly for product of inverses, LkLj
32

Example: Elementary Elim. Matrices
If a = [ 2]
       [ 4]
       [−2]
then

M1a = [ 1  0  0] [ 2]   [2]
      [−2  1  0] [ 4] = [0]
      [ 1  0  1] [−2]   [0]

M2a = [1   0   0] [ 2]   [2]
      [0   1   0] [ 4] = [4]
      [0  1/2  1] [−2]   [0]

Note that

L1 = M1−1 = [ 1  0  0]
            [ 2  1  0]
            [−1  0  1]

L2 = M2−1 = [1    0   0]
            [0    1   0]
            [0  −1/2  1]
33

Example Continued
Further note that

M1M2 = [ 1   0    0]
       [−2   1    0]
       [ 1  1/2   1]

L1L2 = [ 1    0    0]
       [ 2    1    0]
       [−1  −1/2   1]
34

Gaussian Elimination
To reduce general linear system Ax = b to upper triangular form, first choose M1, with a11 as pivot, to annihilate first column of A below first row
System becomes M1Ax = M1b, but solution unchanged
Next choose M2, using a22 as pivot, to annihilate second column of M1A below second row. System becomes M2M1Ax = M2M1b, but solution still unchanged
Process continues for each successive column until all subdiagonal entries have been zeroed
Resulting upper triangular linear system

MAx = Mn−1 ··· M1Ax = Mn−1 ··· M1b = Mb

can be solved by back-substitution to obtain solution to original linear system Ax = b
35
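As a concrete, simplified illustration, the following sketch performs this elimination without pivoting and returns the multipliers as L and the reduced matrix as U; the routine and its name are additions, not from the notes:

    import numpy as np

    def lu_no_pivoting(A):
        """LU factorization by Gaussian elimination without pivoting.
        Assumes every pivot is nonzero; returns unit lower triangular L and upper triangular U."""
        U = A.astype(float).copy()
        n = U.shape[0]
        L = np.eye(n)
        for k in range(n - 1):
            L[k + 1:, k] = U[k + 1:, k] / U[k, k]                 # multipliers m_i
            U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])     # zero column k below pivot
        return L, U

    A = np.array([[ 2.,  4., -2.],
                  [ 4.,  9., -3.],
                  [-2., -3.,  7.]])
    L, U = lu_no_pivoting(A)
    print(np.allclose(L @ U, A))   # True; L and U match the worked example on the next slides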

LU Factorization
Product LkLj unit lower triangular if k < j, so

L = M−1 = M1−1 ··· Mn−1−1 = L1 ··· Ln−1

unit lower triangular
By design, U = MA upper triangular
So A = LU, with L unit lower triangular and U upper triangular
Thus, Ax = b becomes LUx = b, and can be solved by forward-substitution in lower triangular system Ly = b, followed by back-substitution in upper triangular system Ux = y
y = Mb, transformed right-hand side in Gaussian elimination
Gaussian elimination and LU factorization are two ways of expressing same solution process
36

Example: Gaussian Elimination
Use Gaussian elimination to solve linear system

[ 2  4  −2] [x1]   [ 2]
[ 4  9  −3] [x2] = [ 8]
[−2 −3   7] [x3]   [10]

To annihilate subdiagonal entries of first column of A,

M1A = [ 1  0  0] [ 2  4  −2]   [2  4  −2]
      [−2  1  0] [ 4  9  −3] = [0  1   1]
      [ 1  0  1] [−2 −3   7]   [0  1   5]

M1b = [ 1  0  0] [ 2]   [ 2]
      [−2  1  0] [ 8] = [ 4]
      [ 1  0  1] [10]   [12]

37

Example Continued
To annihilate subdiagonal entry of second column of M1A,

M2M1A = [1   0  0] [2  4  −2]   [2  4  −2]
        [0   1  0] [0  1   1] = [0  1   1]
        [0  −1  1] [0  1   5]   [0  0   4]

M2M1b = [1   0  0] [ 2]   [2]
        [0   1  0] [ 4] = [4]
        [0  −1  1] [12]   [8]

We have reduced original system to equivalent upper triangular system

[2  4  −2] [x1]   [2]
[0  1   1] [x2] = [4]
[0  0   4] [x3]   [8]

which can now be solved by back-substitution to obtain x = [−1 2 2]T
38

Example Continued
To write out LU factorization explicitly,

L = L1L2 = [ 1  0  0] [1  0  0]   [ 1  0  0]
           [ 2  1  0] [0  1  0] = [ 2  1  0]
           [−1  0  1] [0  1  1]   [−1  1  1]

so that

[ 2  4  −2]   [ 1  0  0] [2  4  −2]
[ 4  9  −3] = [ 2  1  0] [0  1   1]
[−2 −3   7]   [−1  1  1] [0  0   4]

39

Row Interchanges
Gaussian elimination breaks down if leading diagonal entry of remaining unreduced matrix is zero at any stage
Solution easy: if diagonal entry is zero at stage k, then interchange row k with some subsequent row having nonzero entry in column k and proceed as usual
What if there is no nonzero on or below diagonal in column k?
Then nothing to do at this stage, so move on to next column
This leaves zero on diagonal, so resulting upper triangular matrix U singular, but LU factorization can still be completed
Subsequent back-substitution will fail, however, as it should for singular matrix
40

Partial Pivoting
In principle, any nonzero value will do as pivot, but in practice choice should be made to minimize error
Should avoid amplifying previous rounding errors when multiplying remaining portion of matrix by elementary elimination matrix
So multipliers should not exceed 1 in magnitude, which can be accomplished by choosing entry of largest magnitude on or below diagonal as pivot
Such partial pivoting is essential in practice for numerically stable implementation of Gaussian elimination for general linear systems
41

LU with Partial Pivoting
With partial pivoting, each Mk preceded by Pk, permutation interchanging rows to bring entry of largest magnitude into diagonal pivot position
Still have MA = U, with U upper triangular, but now

M = Mn−1Pn−1 ··· M1P1

M−1 still triangular in general sense, but because of permutations, M−1 not necessarily lower triangular, but still denoted by L
Alternatively, can write PA = LU, where P = Pn−1 ··· P1 permutes rows of A into order determined by partial pivoting, and now L really is lower triangular
42

Complete Pivoting
Complete pivoting is more exhaustive strategy where largest entry in entire remaining unreduced submatrix is permuted into diagonal pivot position
Requires interchanging columns as well as rows, leading to factorization

PAQ = LU,

with L unit lower triangular, U upper triangular, and P and Q permutations
Numerical stability of complete pivoting theoretically superior, but pivot search more expensive than partial pivoting
Numerical stability of partial pivoting more than adequate in practice, so almost always used in solving linear systems by Gaussian elimination
43

Example: Pivoting
Need for pivoting has nothing to do with whether matrix is singular or nearly singular
For example,

A = [0  1]
    [1  0]

is nonsingular yet has no LU factorization unless rows interchanged, whereas

A = [1  1]
    [1  1]

is singular yet has LU factorization
44

Example: Small Pivots
To illustrate effect of small pivots, consider

A = [ε  1]
    [1  1],

where ε is positive number smaller than εmach
If rows not interchanged, then pivot is ε and multiplier is −1/ε, so

M = [  1    0],   L = [  1    0],
    [−1/ε   1]        [ 1/ε   1]

U = [ε     1    ] = [ε    1  ]
    [0  1 − 1/ε ]   [0  −1/ε ]

in floating-point arithmetic. But then

LU = [ 1    0] [ε    1  ]   [ε  1]
     [1/ε   1] [0  −1/ε ] = [1  0] ≠ A

45

Example Continued
Using small pivot, and correspondingly large multiplier, has caused unrecoverable loss of information in transformed matrix
If rows interchanged, then pivot is 1 and multiplier is −ε, so

M = [ 1  0],   L = [1  0],
    [−ε  1]        [ε  1]

U = [1    1  ] = [1  1]
    [0  1 − ε]   [0  1]

in floating-point arithmetic
Thus,

LU = [1  0] [1  1]   [1  1],
     [ε  1] [0  1] = [ε  1]

which is correct after permutation
46

Pivoting, continued
Although pivoting generally required for stability of Gaussian elimination, pivoting not required for some important classes of matrices:
• Diagonally dominant:  Σ_{i=1, i≠j}^n |aij| < |ajj|,  j = 1, …, n
• Symmetric positive definite:  A = AT and xTAx > 0 for all x ≠ o
47
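The small-pivot example above is easy to reproduce numerically; the sketch below reuses the lu_no_pivoting routine introduced earlier (both are additions) and compares it with SciPy's pivoted LU:

    import numpy as np
    from scipy.linalg import lu            # LU with partial pivoting, for comparison

    eps = 1e-20                            # positive and smaller than eps_mach (about 2.2e-16)
    A = np.array([[eps, 1.],
                  [1.,  1.]])

    # Without row interchange: multiplier 1/eps is huge and the (2,2) entry of A is lost
    L, U = lu_no_pivoting(A)
    print(L @ U)                           # [[1e-20, 1.], [1., 0.]], which differs from A

    # With partial pivoting the product reproduces A
    P, L, U = lu(A)
    print(np.allclose(P @ L @ U, A))       # True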

Residual
Recall that residual r = b − Axˆ satisfies
∥r∥ / (∥A∥ · ∥xˆ∥) ≤ ∥E∥ / ∥A∥,
where E is backward error in matrix A
How large is ∥E∥ likely to be in practice?
For LU factorization by Gaussian elimination,

∥E∥ / ∥A∥ ≤ ρ n εmach,

where growth factor ρ is ratio of largest entry of U to largest entry of A
Without pivoting, ρ can be arbitrarily large, so Gaussian elimination without pivoting is unstable
With partial pivoting, ρ can still be as large as 2n−1, but such behavior extremely rare
48

Residual, continued
There is little or no growth in practice, so

∥E∥ / ∥A∥ ≈ n εmach,
which means Gaussian elimination with partial pivoting yields small relative residual regardless how ill-conditioned system is
Thus, small relative residual does not necessarily imply computed solution close to "true" solution unless system is well-conditioned
Complete pivoting yields even smaller growth factor, but additional margin of stability usually not worth extra cost
49

Example: Small Residual
Using 3-digit decimal arithmetic to solve

[0.641  0.242] [x1]   [0.883]
[0.321  0.121] [x2] = [0.442],

Gaussian elimination with partial pivoting yields triangular system

[0.641  0.242   ] [x1]   [ 0.883   ]
[0      0.000242] [x2] = [−0.000383],

and back-substitution then gives solution

x = [0.782  1.58]T

Exact residual for this solution is

r = b − Ax = [−0.000622]
             [−0.000202],

which is as small as can expect using 3-digit arithmetic
50

Example Continued
But exact solution is

x = [1.00]
    [1.00],
so error is almost as large as solution
Cause of this phenomenon is that matrix is nearly singular (cond > 4000)
Division that determines x2 is between two quantities that are both on order of rounding error, and hence result is essentially arbitrary
When arbitrary value for x2 is substituted into first equation, value for x1 is computed so that first equation is satisfied, yielding small residual, but poor solution
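The same effect can be reproduced in double precision (an added check, using the 3-digit solution quoted above):

    import numpy as np

    A = np.array([[0.641, 0.242],
                  [0.321, 0.121]])
    b = np.array([0.883, 0.442])

    x_hat   = np.array([0.782, 1.58])      # 3-digit solution from the previous slide
    x_exact = np.array([1.0, 1.0])

    print(np.linalg.norm(b - A @ x_hat, np.inf))     # ~6e-4: residual is tiny
    print(np.linalg.norm(x_hat - x_exact, np.inf))   # 0.58: error nearly as large as solution
    print(np.linalg.cond(A))                         # > 4000: matrix nearly singular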
51

Implementation of Gaussian Elimination
Gaussian elimination, or LU factorization, has general form of triple-nested loop
for k = 1 to n−1
    for i = k+1 to n
        for j = k+1 to n
            aij = aij − (aik/akk) akj
        end
    end
end
Indices i, j, and k of for loops can be taken in any order, for total of 3! = 6 different ways of arranging loops
These variations have different memory access patterns, which may cause their performance to vary widely, depending on architectural features such as cache, paging, etc.
52

Uniqueness of LU Factorization
Despite variations in computing it, LU factorization unique up to diagonal scaling of factors
Provided row pivot sequence is same, if we have two LU factorizations PA = LU = LˆUˆ, then Lˆ−1L = UˆU−1 = D is both lower and upper triangular, hence diagonal
If both L and Lˆ unit lower triangular, then D must be identity matrix, so L = Lˆ and U = Uˆ
Uniqueness made explicit in LDU factorization PA = LDU, with L unit lower triangular, U unit upper triangular, and D diagonal
53

Storage Management
Elementary elimination matrices Mk, their inverses Lk, and permutation matrices Pk used in formal description of factorization process are not formed explicitly in actual implementation
U overwrites upper triangle of A, multipliers in L overwrite strict lower triangle of A, and unit diagonal of L need not be stored
Row interchanges usually not done explicitly; auxiliary integer vector keeps track of row order in original locations
54

Complexity of Solving Linear Systems
LU factorization requires about n3/3 floating-point multiplications and similar number of additions
Forward- and back-substitution for single right-hand-side vector together require about n2 multiplications and similar number of additions
Can also solve linear system by matrix inversion: x = A−1b
Computing A−1 tantamount to solving n linear systems, requiring LU factorization of A followed by n forward- and back-substitutions, one for each column of identity matrix
Operation count for inversion is about n3, three times as expensive as LU factorization
55

Inversion vs Factorization
Even with many right-hand sides b, inversion never overcomes higher initial cost, since each matrix-vector multiplication A−1b requires n2 operations, similar to cost of forward- and back-substitution
Inversion gives less accurate answer. Simple example: solving system 3x = 18 by division gives x = 18/3 = 6, but inversion gives x = 3−1 × 18 = 0.333 × 18 = 5.99 using 3-digit arithmetic
Matrix inverses often occur as convenient notation in formulas, but explicit inverse rarely required to implement such formulas
For example, product A−1B should be computed by LU factorization of A, followed by forward- and back-substitutions using each column of B
56

Gauss-Jordan Elimination
In Gauss-Jordan elimination, matrix reduced to diagonal rather than triangular form
Row combinations used to annihilate entries above as well as below diagonal
Elimination matrix used for given column vector a of form

[1  ⋯  0   −m1     0  ⋯  0] [ a1 ]   [0 ]
[⋮  ⋱  ⋮    ⋮      ⋮      ⋮] [ ⋮  ]   [⋮ ]
[0  ⋯  1   −mk−1   0  ⋯  0] [ak−1]   [0 ]
[0  ⋯  0    1      0  ⋯  0] [ ak ] = [ak]
[0  ⋯  0   −mk+1   1  ⋯  0] [ak+1]   [0 ]
[⋮      ⋮    ⋮     ⋮  ⋱  ⋮] [ ⋮  ]   [⋮ ]
[0  ⋯  0   −mn     0  ⋯  1] [ an ]   [0 ]

where mi = ai/ak, i = 1, …, n
57

Gauss-Jordan Elimination, cont.
Gauss-Jordan elimination requires about n3/2 multiplications and similar number of additions, 50% more expensive than LU factorization
During elimination phase, same row operations also applied to right-hand-side vector (or vectors) of system of linear equations
Once matrix in diagonal form, components of solution computed by dividing each entry of transformed right-hand-side by corresponding diagonal entry of matrix
Latter requires only n divisions, but not enough cheaper to offset more costly elimination phase
58

Solving Modified Problems
If right-hand side of linear system changes but matrix does not, then LU factorization need not be repeated to solve new system
Substantial savings in work, since additional triangular solutions cost only O(n2) work, in contrast to O(n3) cost of factorization
Sometimes refactorization can be avoided even when matrix does change
Sherman-Morrison formula gives inverse of matrix resulting from rank-one change to matrix whose inverse is already known:

(A − uvT)−1 = A−1 + A−1u(1 − vTA−1u)−1vTA−1,

where u and v are n-vectors
Evaluation of formula requires O(n2) work (for matrix-vector multiplications) rather than O(n3) work required for inversion
59

Solving Modified Problems, cont.
To solve linear system (A − uvT )x = b with new matrix, use formula to obtain
x = (A−uvT)−1b
= A−1b + A−1u(1 − vT A−1u)−1vT A−1b,
which can be implemented by steps
1. Solve Az=u for z, so z=A−1u
2. Solve Ay=b for y, so y=A−1b
3. Compute x = y + ((vTy)/(1 − vTz))z
If A already factored, procedure requires only triangular solutions and inner products, so only O(n2) work and no explicit inverses
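A minimal sketch of these three steps (an added illustration; for simplicity it calls a dense solver where an existing LU factorization of A would be reused in practice):

    import numpy as np

    def solve_rank_one_modified(A, b, u, v):
        """Solve (A - u v^T) x = b via the Sherman-Morrison formula."""
        z = np.linalg.solve(A, u)                  # step 1: z = A^{-1} u
        y = np.linalg.solve(A, b)                  # step 2: y = A^{-1} b
        return y + ((v @ y) / (1.0 - v @ z)) * z   # step 3: update

    # Data from the rank-one update example on the next slides
    A = np.array([[ 2.,  4., -2.],
                  [ 4.,  9., -3.],
                  [-2., -3.,  7.]])
    b = np.array([2., 8., 10.])
    u = np.array([0., 0., -2.])
    v = np.array([0., 1., 0.])

    print(solve_rank_one_modified(A, b, u, v))     # [-7.  4.  0.]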
60

Example: Rank-1 Updating of Solution
Consider rank-one modification
[ 2  4  −2] [x1]   [ 2]
[ 4  9  −3] [x2] = [ 8]
[−2 −1   7] [x3]   [10]
(with 3, 2 entry changed) of system whose LU factorization was computed in earlier example
One way to choose update vectors:

u = [ 0]           [0]
    [ 0]  and  v = [1],
    [−2]           [0]
so matrix of modified system is A − uvT
61

Example Continued
Using LU factorization of A to solve Az = u and Ay = b,

z = [−3/2]           [−1]
    [ 1/2]  and  y = [ 2]
    [−1/2]           [ 2]

Final step computes updated solution

x = y + (vTy / (1 − vTz)) z

  = [−1]                  [−3/2]   [−7]
    [ 2]  +  2/(1 − 1/2)  [ 1/2] = [ 4]
    [ 2]                  [−1/2]   [ 0]
We have thus computed solution to modified system without factoring modified matrix
62

Scaling Linear Systems
In principle, solution to linear system unaffected by diagonal scaling of matrix and right-hand-side vector
In practice, scaling affects both conditioning and selection of pivots in Gaussian elimination, which in turn affect numerical accuracy in finite-precision arithmetic
Usually best if all entries (or uncertainties in entries) of matrix have about same size
Sometimes obvious how to accomplish this by choice of measurement units for variables, but there is no foolproof method for doing so in general
Scaling can introduce rounding errors if not done carefully
63

Example: Scaling
Linear system
[1  0] [x1]   [1]
[0  ε] [x2] = [ε]
has condition number 1/ε, so ill-conditioned if
ε small
If second row multiplied by 1/ε, then system
becomes perfectly well-conditioned
Apparent ill-conditioning due purely to poor scaling
Much less obvious how to correct poor scaling in general
64

Iterative Refinement
Given approximate solution x0 to linear system Ax = b, compute residual
r0 = b − Ax0
Now solve linear system Az0 = r0 and take
x1 = x0 + z0
as new and “better” approximate solution, since
Ax1 = A(x0 +z0) = Ax0 +Az0 = (b−r0)+r0=b
Process can be repeated to refine solution successively until convergence, potentially producing solution accurate to full machine precision
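A minimal sketch of this loop (an added illustration; in practice the solves would reuse the LU factors of A and the residual would be accumulated in higher precision):

    import numpy as np

    def iterative_refinement(A, b, x0, iterations=3):
        """Improve an approximate solution x0 of Ax = b by repeated correction."""
        x = x0.copy()
        for _ in range(iterations):
            r = b - A @ x                  # residual of current approximation
            z = np.linalg.solve(A, r)      # correction: solve A z = r
            x = x + z
        return x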
65

Iterative Refinement, continued
Iterative refinement requires double storage, since both original matrix and LU factorization required
Due to cancellation, residual usually must be computed with higher precision for iterative refinement to produce meaningful improvement
For these reasons, iterative improvement often impractical to use routinely, but can still be useful in some circumstances
66

Special Types of Linear Systems
Work and storage can often be saved in solving linear system if matrix has special properties
Examples include:
• Symmetric: A=AT, aij =aji for all i, j
• Positive definite: xT Ax > 0 for all x ̸= o
• Band: aij =0 for all |i−j|>β, where β is bandwidth of A
• Sparse: most entries of A are zero
67

Symmetric Positive Definite Matrices
If A is symmetric and positive definite, then LU factorization can be arranged so that
U = LT, that is, A = LLT,
where L is lower triangular with positive diagonal entries
Algorithm for computing Cholesky factorization derived by equating corresponding entries of A and LLT and generating them in correct order
In 2 × 2 case, for example,

[a11  a21]   [l11   0 ] [l11  l21]
[a21  a22] = [l21  l22] [ 0   l22],

which implies

l11 = √a11,   l21 = a21/l11,   l22 = √(a22 − l21^2)
68

Cholesky Factorization
One way to write resulting general algorithm, in which Cholesky factor L overwrites original matrix A:
for j = 1 to n
    for k = 1 to j − 1
        for i = j to n
            aij = aij − aik · ajk
        end
    end
    ajj = √ajj
    for k = j + 1 to n
        akj = akj / ajj
    end
end
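The same algorithm in NumPy form (an added sketch; A is assumed symmetric positive definite, and only its lower triangle is referenced):

    import numpy as np

    def cholesky_lower(A):
        """Return lower triangular L with A = L L^T (no pivoting needed for SPD A)."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        L = np.zeros((n, n))
        for j in range(n):
            L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
            L[j + 1:, j] = (A[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
        return L

    A = np.array([[4., 2.],
                  [2., 3.]])
    L = cholesky_lower(A)
    print(np.allclose(L @ L.T, A))   # True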
69

Cholesky Factorization, continued
Features of Cholesky algorithm for symmetric positive definite matrices:
• All n square roots are of positive numbers, so algorithm well defined
• No pivoting required for numerical stability
• Only lower triangle of A accessed, and hence
upper triangular portion need not be stored
• Only n3/6 multiplications and similar number of additions required
70

Symmetric Indefinite Systems
For symmetric indefinite A, Cholesky factorization not applicable, and some form of pivoting generally required for numerical stability
Factorization of form
PAPT =LDLT,
with L unit lower triangular and D either tridiagonal or block diagonal with 1 × 1 and 2 × 2 diagonal blocks, can be computed stably using symmetric pivoting strategy
In either case, cost comparable to Cholesky factorization
71

Band Matrices
Gaussian elimination for band matrices differs little from general case — only ranges of loops change
Typically store matrix in array by diagonals to avoid storing zero entries
If pivoting required for numerical stability, bandwidth can grow (but no more than double)
General purpose solver for arbitrary bandwidth similar to code for Gaussian elimination for general matrices
For fixed small bandwidth, band solver can be extremely simple, especially if pivoting not required for stability
72

Tridiagonal Matrices
Consider tridiagonal matrix, for example
A = [b1  c1   0    ⋯    0 ]
    [a2  b2  c2    ⋱    ⋮ ]
    [ 0   ⋱   ⋱    ⋱    0 ]
    [ ⋮   ⋱  an−1 bn−1 cn−1]
    [ 0   ⋯   0    an   bn ]

If pivoting not required for stability, then Gaussian elimination reduces to

d1 = b1
for i = 2 to n
    mi = ai / di−1
    di = bi − mi ci−1
end
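Together with the corresponding sweep over the right-hand side and a back-substitution, this gives the usual O(n) tridiagonal solver; a sketch (interface and names chosen here, assuming pivoting is not needed):

    import numpy as np

    def solve_tridiagonal(a, b, c, rhs):
        """Solve a tridiagonal system with subdiagonal a[1:], diagonal b,
        superdiagonal c[:-1]; assumes elimination is stable without pivoting."""
        n = len(b)
        d = b.astype(float).copy()
        y = rhs.astype(float).copy()
        for i in range(1, n):              # elimination sweep
            m = a[i] / d[i - 1]
            d[i] -= m * c[i - 1]
            y[i] -= m * y[i - 1]
        x = np.zeros(n)                    # back-substitution with bidiagonal factor U
        x[n - 1] = y[n - 1] / d[n - 1]
        for i in range(n - 2, -1, -1):
            x[i] = (y[i] - c[i] * x[i + 1]) / d[i]
        return x

    # Small check against a dense solve
    a = np.array([0., 1., 1., 1.])         # a[0] unused
    b = np.array([4., 4., 4., 4.])
    c = np.array([1., 1., 1., 0.])         # c[-1] unused
    rhs = np.array([5., 6., 6., 5.])
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(solve_tridiagonal(a, b, c, rhs), np.linalg.solve(A, rhs)))  # True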
73

Tridiagonal Matrices, continued
LU factorization of A given by

L = [ 1    0   ⋯    ⋯    0]
    [m2    1   ⋱         ⋮]
    [ 0    ⋱   ⋱    ⋱    ⋮]
    [ ⋮    ⋱  mn−1  1    0]
    [ 0    ⋯   0    mn   1]

U = [d1   c1   0    ⋯     0 ]
    [ 0   d2   c2   ⋱     ⋮ ]
    [ ⋮    ⋱   ⋱    ⋱     0 ]
    [ ⋮         ⋱  dn−1  cn−1]
    [ 0    ⋯   ⋯    0     dn ]
74

General Band Matrices
In general, band system of bandwidth β requires O(βn) storage and factorization requires O(β2n) work
Compared with full system, savings substantial if β ≪ n
75

Iterative Methods for Linear Systems
Gaussian elimination is direct method for solving linear system, producing exact solution in finite number of steps (in exact arithmetic)
Iterative methods begin with initial guess for solution and successively improve it until desired accuracy attained
In theory, might take infinite number of iterations to converge to exact solution, but in practice terminate iterations when residual as small as desired
For some types of problems, iterative methods have significant advantages over direct methods
We will study specific iterative methods later when we consider solution of partial differential equations
76

LINPACK and LAPACK
LINPACK is software package for solving wide variety of systems of linear equations, both general dense systems and special systems, such as symmetric or banded
Solving linear systems of such fundamental importance in scientific computing that LINPACK has become standard benchmark for comparing performance of computers
LAPACK is more recent replacement for LINPACK featuring higher performance on modern computer architectures, including some parallel computers
Both LINPACK and LAPACK available from Netlib
77

Basic Linear Algebra Subprograms
High-level routines in LINPACK and LAPACK based on lower-level Basic Linear Algebra Subprograms (BLAS)
BLAS encapsulate basic operations on vectors and matrices so they can be optimized for given computer architecture while high-level routines that call them remain portable
Generic Fortran versions of BLAS available from Netlib, and many computer vendors provide custom versions optimized for their particular systems
78

Examples of BLAS

Level   Work    Examples   Function
1       O(n)    saxpy      Scalar × vector + vector
                sdot       Inner product
                snrm2      Euclidean vector norm
2       O(n2)   sgemv      Matrix-vector product
                strsv      Triangular solution
                sger       Rank-one update
3       O(n3)   sgemm      Matrix-matrix product
                strsm      Multiple triang. solutions
                ssyrk      Rank-k update
Level-3 BLAS have more opportunity for data reuse, and hence higher performance, because they perform more operations per data item than lower-level BLAS
79